diff --git a/README.md b/README.md
index 2c62969b..5ef3c6a5 100644
--- a/README.md
+++ b/README.md
@@ -1,20 +1,10 @@
 # Wingman AI Core

-Wingman AI allows you to use your voice to talk to various AI providers and LLMs, process your conversations, and ultimately trigger actions such as pressing buttons or reading answers. Our _Wingmen_ are like characters and your interface to this world, and you can easily control their behavior and characteristics, even if you're not a developer.
-
-**1.5.0 Showreel:**
+Official website: [https://www.wingman-ai.com](https://www.wingman-ai.com)

 [![Wingman AI 1.5 Showreel](https://img.youtube.com/vi/qR8FjmQJRGE/0.jpg)](https://youtu.be/qR8FjmQJRGE 'Wingman AI Showreel')

-**Release trailer:**
-
-[![Wingman AI 1.0 Release Trailer](https://img.youtube.com/vi/HR1Zc9QD1jE/0.jpg)](https://www.youtube.com/watch?v=HR1Zc9QD1jE 'Wingman AI Release Trailer')
-
-**In-depth tutorial:**
-
-[![Wingman AI 1.0 Tutorial](https://img.youtube.com/vi/--GkXcA5msw/0.jpg)](https://www.youtube.com/watch?v=--GkXcA5msw 'Wingman AI Tutorial')
-
-AI is complex and it scares people. It's also **not just ChatGPT**. We want to make it as easy as possible for you to get started. That's what _Wingman AI_ is all about. It's a **framework** that allows you to build your own Wingmen and use them in your games and programs.
+Wingman AI allows you to use your voice to talk to various AI providers and LLMs, process your conversations, and ultimately trigger actions such as pressing buttons or reading answers. Our _Wingmen_ are like characters and your interface to this world, and you can easily control their behavior and characteristics, even if you're not a developer. AI is complex and it scares people. It's also **not just ChatGPT**. We want to make it as easy as possible for you to get started. That's what _Wingman AI_ is all about. It's a **framework** that allows you to build your own Wingmen and use them in your games and programs.

 ![Wingman Flow](assets/wingman-flow.png)

@@ -29,20 +19,35 @@ The idea is simple, but the possibilities are endless. For example, you could:

 ## Features

+
+ +
+
+
 Since version 2.0, Wingman AI Core acts as a "backend" API (using FastAPI and Pydantic) with the following features:

 - **Push-to-talk or voice activation** to capture user audio
-- OpenAI **text generation** and **function calling**
-- **Speech-to-text** providers (STT) for transcription:
+- **AI providers** with different models:
+  - OpenAI
+  - Google (Gemini)
+  - Azure
+  - Groq (llama3 with function calling)
+  - Mistral Cloud
+  - Open Router
+  - Cerebras
+  - Perplexity
+  - Wingman Pro (unlimited access to several providers and models)
+- **Speech-to-text providers** (STT) for transcription:
   - OpenAI Whisper
-  - OpenAI Whisper via Azure
+  - Azure Whisper
   - Azure Speech
-  - whispercpp (local)
+  - whispercpp (local, bundled with Wingman AI)
+  - Wingman Pro (Azure Speech or Azure Whisper)
 - **Text-to-speech** (TTS) providers:
   - OpenAI TTS
   - Azure TTS
-  - Elevenlabs
   - Edge TTS (free)
+  - Elevenlabs
   - XVASynth (local)
 - **Sound effects** that work with every supported TTS provider
 - **Multilingual** by default
@@ -54,10 +59,14 @@ Since version 2.0, Wingman AI Core acts as a "backend" API (using FastAPI and Py
 - **Skills** that can do almost anything. Think Alexa... but better.
 - **directory/file-based configuration** for different use cases (e.g. games) and Wingmen. No database needed.
 - Wingman AI Core exposes a lot of its functionality via **REST services** (with an OpenAPI/Swagger spec) and can send and receive messages from clients, games etc. using **WebSockets**.
+- Sound Library to play mp3 or wav files in commands or Skills (similar to HCS Voice Packs for Voice Attack)
+- AI instant sound effects generation with Elevenlabs

 We (Team ShipBit) offer an additional [client with a neat GUI](https://www.wingman-ai.com) that you can use to configure everything in Wingman AI Core.

-
+
+ +
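Side note on the **REST services** feature above: since Wingman AI Core is a FastAPI app, its REST surface is self-describing. A minimal sketch for exploring it - host and port are placeholders (check your Core startup log), not values from this diff:

```python
import json
from urllib.request import urlopen

# FastAPI serves its generated OpenAPI spec at /openapi.json by default.
# 127.0.0.1:49111 is an assumed host/port - adjust to your running Core instance.
spec = json.load(urlopen("http://127.0.0.1:49111/openapi.json"))
print(sorted(spec["paths"].keys()))  # lists all available REST routes
```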
## Is this a "Star Citizen" thing? @@ -110,11 +119,11 @@ Our Wingmen use OpenAI's APIs and they charge by usage. That means: You don't pa #### ElevenLabs -You don't have to use [ElevenLabs](https://elevenlabs.io/) as TTS provider, but their voices are great. You can also clone your own with less than 5 minutes of sample audio, e.g. your friend, an actor or a recording of an NPC in your game. +You don't have to use Elevenlabs as TTS provider, but their voices are great and you can generate instant sound effects with their API - fully integrated into Wingman AI. You can clone any voice with 3 minutes of clean audio, e.g. your friend, an actor or a recording of an NPC in your game. -They have a free tier with a limited number of characters generated per month so you can try it out first. You can find more information on their [pricing page](https://elevenlabs.io/pricing). +Elevenlabs offers a $5 tier with 30k characters and a $22 tier with 100k characters. Characters roll over each month with a max of 3 months worth of credits. If you're interested in the service, please consider using our [referral link here](https://elevenlabs.io/pricing?from=partnerlewis2510). It costs you nothing extra and supports Wingman AI. We get 22% of all payments in your first year. Thank you! -Signing up is very similar to OpenAI: Create your account, set up your payment method, and create an API key. +Signing up is very similar to OpenAI: Create your account, set up your payment method, and create an API key. Enter that API key in Wingman AI when asked. #### Edge TTS (Free) @@ -122,9 +131,7 @@ Microsoft Edge TTS is actually free and you don't need an API key to use it. How ### Are local LLMs replacing OpenAI supported? -Wingman AI exposes the `base_url` property that the OpenAI Python client uses. So if you have a plug-in replacement for OpenAI's client, you can easily connect it to Wingman AI Core. You can also write your own custom Wingman that uses your local LLM. - -Integrating specific LLMs oder models is currently not on our (ShipBit) priority list [as explained here](https://github.com/ShipBit/wingman-ai/issues/108) and we do not offer live support for it. Check out or Discord server if you're interested in local LLMs - there is a vibrant community discussing and testing different solutions and if we ever find one that satisfies our requirements, we might consider supporting it officially. +You can use any LLM offering an OpenAI-compatible API and connect it to Wingman AI Core easily. ## Installing Wingman AI @@ -133,6 +140,7 @@ Integrating specific LLMs oder models is currently not on our (ShipBit) priority - Download the installer of the latest version from [wingman-ai.com](https://www.wingman-ai.com). - Install it to a directory of your choice and start the client `Wingman AI.exe`. - The client will will auto-start `Wingman AI Core.exe` in the background + - The client will auto-start `whispercpp` in the background. If you have an NVIDIA RTX GPU, install the latest CUDA driver from NVIDIA and enable GPU acceleration in the Settings view. If that doesn't work for some reason, try starting `Wingman AI Core.exe` manually and check the terminal or your **logs** directory for errors. @@ -142,29 +150,23 @@ If that doesn't work for some reason, try starting `Wingman AI Core.exe` manuall Wingman runs well on MacOS. While we don't offer a precompiled package for it, you can [run it from source](#develop-with-wingman-ai). Note that the TTS provider XVASynth is Windows-only and therefore not supported on MacOS. 
-### Linux
-
-Linux is not officially supported but some of our community members were able to run it anyways. Check out [their documentation](docs/develop-linux.md).
-
 ## Who are these Wingmen?

-Our default Wingmen serve as examples and starting points for your own Wingmen, and you can easily reconfigure them using the client. You can also add your own Wingmen.
+Our default Wingmen serve as examples and starting points for your own Wingmen, and you can easily reconfigure them using the client. You can also add your own Wingmen very easily.

 ### Computer & ATC

 Our first two default Wingmen are using OpenAI's APIs. The basic process is as follows:

 - Your speech is transcribed by the configured STT provider.
-- The transcript is then sent as text to the **GPT-3.5 Turbo API**, which responds with a text and maybe function calls.
-- Wingman AI Core executes function calls which equals a command execution.
+- The transcript is then sent as text to the configured LLM, which responds with text and maybe function calls.
+- Wingman AI Core executes function calls which can be command executions or skill functions.
 - The response is then read out to you by the configured TTS provider.
 - Clients connected to Wingman AI Core are notified about progress and changes live and display them in the UI.

-Talking to a Wingman is like chatting with ChatGPT. This means that you can customize their behavior by giving them a `context` (or `system`) prompt as starting point for your conversation. You can also just tell them how to behave and they will remember that during your conversation. ATC and Computer use very different prompts, so they behave very differently.
-
-The magic happens when you configure _commands_ or key bindings. GPT will then try to match your request with the configured commands and execute them for you. It will automatically choose the best matching command based only on its name, so make sure you give it a good one (e.g. `RequestLandingPermission`).
+Talking to a Wingman is like chatting with ChatGPT but with your voice. And it can actually do everything that Python can do. This means that you can customize their behavior by giving them a backstory as a starting point for your conversation. You can also just tell them how to behave and they will remember that during your conversation.

-More information about the API can be found in the [OpenAI API documentation](https://beta.openai.com/docs/introduction).
+The magic happens when you configure _commands_ or key bindings. GPT will then try to match your request with the configured commands and execute them for you. It will automatically choose the best matching command based only on its name, so make sure you give it a good one (e.g. `Request landing permission`).

 ### StarHead

@@ -183,17 +185,17 @@ For updates and more information, visit the [StarHead website](https://star-head

 ### Noteworthy community projects

-- [UEXCorp](https://discord.com/channels/1173573578604687360/1179594417926066196) by @JayMatthew: A former Custom Wingman, now Skill that utilizes the UEX Corp API to pull live data for Star Citizen. Think StarHead on steroids.
-- [Clippy](https://discord.com/channels/1173573578604687360/1241854342282219662) by @teddybear082: A tribute Skill to the sketchy Microsoft assistant we all used to hate.
-- [WebSearch](https://discord.com/channels/1173573578604687360/1245432544946688081) by @teddybear082: A Skill that can pull data from websites (and quote the sources) for you.
+- [Community Wingmen](https://discord.com/channels/1173573578604687360/1176141176974360627)
+- [Community Skills](https://discord.com/channels/1173573578604687360/1254811139867410462)
+- [Different Games with Wingman AI](https://discord.com/channels/1173573578604687360/1254868009940418572)

 ## Can I configure Wingman AI Core without using your client?

 Yes, you can! You can edit all the configs in your `%APP_DATA%\ShipBit\WingmanAI\[version]` directory.

-The YAML configs are very indentation-sensitive, so please be careful. We recommend using [VSCode](https://code.visualstudio.com/) with the [YAML extension](https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml) to edit them.
+The YAML configs are very indentation-sensitive, so please be careful.

-**There is no hot reloading**, so you have to restart Wingman AI Core after you made changes to the configs.
+**There is no hot reloading**, so you have to restart Wingman AI Core after you make manual changes to the configs.

 ### Directory/file-based configuration

@@ -217,15 +219,13 @@ Access secrets in code by using `secret_keeper.py`. You can access everything el

 Wingman supports all languages that OpenAI (or your configured AI provider) supports. Setting this up in Wingman is really easy:

-Find the `context` setting for the Wingman you want to change.
+Some STT providers need a simple configuration to specify a non-English language. You might also have to find a voice that speaks the desired language.

-Now add a simple sentence to the `context` prompt: `Always answer in the language I'm using to talk to you.`
+Then find the `backstory` setting for the Wingman you want to change and add a simple sentence to the `backstory` prompt: `Always answer in the language I'm using to talk to you.` or something like `Always answer in Portuguese.`

 The cool thing is that you can now trigger commands in the language of your choice without changing/translating the `name` of the commands - the AI will do that for you.

-Also note that depending on your TTS provider, you might have to pick a voice that can actually speak your desired language or you'll end up with something really funny (like an American voice trying to speak German).
-
 ## Develop with Wingman AI

 Are you ready to build your own Wingman or implement new features to the framework?

@@ -279,7 +279,7 @@ This list will inevitably remain incomplete. If you miss your name here, please

 #### Special thanks

-- [**JayMatthew aka SawPsyder**](https://robertsspaceindustries.com/citizens/JayMatthew), @teddybear082 and @Thaendril for outstanding moderation in Discord, constant feedback and valuable Core & Skill contributions
+- [**JayMatthew aka SawPsyder**](https://robertsspaceindustries.com/citizens/JayMatthew), @teddybear082, @Thaendril and @Xul for outstanding moderation in Discord, constant feedback and valuable Core & Skill contributions
 - @lugia19 for developing and improving the amazing [elevenlabslib](https://github.com/lugia19/elevenlabslib).
 - [Knebel](https://www.youtube.com/@Knebel_DE) who helped us kickstart Wingman AI by showing it on stream and grants us access to the [StarHead API](https://star-head.de/) for Star Citizen.
 - @Zatecc from [UEX Corp](https://uexcorp.space/) who supports our community developers and Wingmen with live trading data for Star Citizen using the [UEX Corp API](https://uexcorp.space/api.html).
@@ -300,7 +300,3 @@ To our greatest Patreon supporters we say: `o7` Commanders!
- Paradox - Gopalfreak aka Rockhound - [Averus](https://robertsspaceindustries.com/citizens/Averus) - -#### Wingmen (Patreons) - -[Ira Robinson aka Serene/BlindDadDoes](http://twitch.tv/BlindDadDoes), Zenith, DiVille, [Hiwada], Hades aka Architeutes, Raziel317, [CptToastey](https://www.twitch.tv/cpttoastey), NeyMR AKA EagleOne (Capt.Epic), a Bit Brutal, AlexeiX, [Dragon Aura](https://robertsspaceindustries.com/citizens/Dragon_Aura), Perry-x-Rhodan, DoublarThackery, SilentCid, Bytebool, Exaust A.K.A Nikoyevitch, Tycoon3000, N.T.G, Jolan97, Greywolfe, [Dayel Ostraco aka BlakeSlate](https://dayelostra.co/), Nielsjuh01, Manasy, Sierra-Noble, Simon says aka Asgard, JillyTheSnail, [Admiral-Chaos aka Darth-Igi], The Don, Tristan Import Error, Munkey the pirate, Norman Pham aka meMidgety, [meenie](https://github.com/meenie), [Tilawan](https://github.com/jlaluces123), Mr. Moo42, Geekdomo, Jenpai, Blitz, [Aaron Sadler](https://github.com/AaronSadler687), [SleeperActual](https://vngd.net/), parawho, [HypeMunkey](https://robertsspaceindustries.com/citizens/HypeMunkey), Huniken, SuperTruck, [NozDog], Skipster [Skipster Actual], Fredek, Ruls-23, Dexonist, Captain Manga diff --git a/api/commands.py b/api/commands.py index 1fb9134b..d551591f 100644 --- a/api/commands.py +++ b/api/commands.py @@ -26,11 +26,12 @@ class SaveSecretCommand(WebSocketCommandModel): class RecordKeyboardActionsCommand(WebSocketCommandModel): command: Literal["record_keyboard_actions"] = "record_keyboard_actions" - recording_type: KeyboardRecordingType = KeyboardRecordingType.SINGLE + recording_type: KeyboardRecordingType class StopRecordingCommand(WebSocketCommandModel): command: Literal["stop_recording"] = "stop_recording" + recording_type: KeyboardRecordingType # SENT TO CLIENT diff --git a/api/enums.py b/api/enums.py index 655cb14d..091bb624 100644 --- a/api/enums.py +++ b/api/enums.py @@ -50,6 +50,7 @@ class CustomPropertyType(Enum): SINGLE_SELECT = "single_select" VOICE_SELECTION = "voice_selection" SLIDER = "slider" + AUDIO_FILES = "audio_files" class AzureApiVersion(Enum): @@ -68,13 +69,6 @@ class TtsVoiceGender(Enum): FEMALE = "Female" -class OpenAiModel(Enum): - """https://platform.openai.com/docs/models/overview""" - - GPT_4O = "gpt-4o" - GPT_4O_MINI = "gpt-4o-mini" - - class MistralModel(Enum): """https://docs.mistral.ai/getting-started/models/""" @@ -85,6 +79,16 @@ class MistralModel(Enum): MISTRAL_MEDIUM = "mistral-medium-latest" MISTRAL_LARGE = "mistral-large-latest" +class PerplexityModel(Enum): + """https://docs.perplexity.ai/guides/model-cards""" + + SONAR_SMALL = "llama-3.1-sonar-small-128k-online" + SONAR_MEDIUM = "llama-3.1-sonar-large-128k-online" + SONAR_LARGE = "llama-3.1-sonar-huge-128k-online" + CHAT_SMALL = "llama-3.1-sonar-small-128k-chat" + CHAT_LARGE = "llama-3.1-sonar-large-128k-chat" + LLAMA3_8B = "llama-3.1-8b-instruct" + LLAMA3_70B = "llama-3.1-70b-instruct" class GoogleAiModel(Enum): GEMINI_1_5_FLASH = "gemini-1.5-flash" @@ -151,6 +155,8 @@ class ConversationProvider(Enum): AZURE = "azure" WINGMAN_PRO = "wingman_pro" GOOGLE = "google" + CEREBRAS = "cerebras" + PERPLEXITY = "perplexity" class ImageGenerationProvider(Enum): @@ -184,6 +190,7 @@ class SkillCategory(Enum): STAR_CITIZEN = "star_citizen" TRUCK_SIMULATOR = "truck_simulator" NO_MANS_SKY = "no_mans_sky" + FLIGHT_SIMULATOR = "flight_simulator" # Pydantic models for enums @@ -231,13 +238,11 @@ class TtsVoiceGenderEnumModel(BaseEnumModel): gender: TtsVoiceGender -class OpenAiModelEnumModel(BaseEnumModel): - model: OpenAiModel - - class 
MistralModelEnumModel(BaseEnumModel): model: MistralModel +class PerplexityModelEnumModel(BaseEnumModel): + model: PerplexityModel class GoogleAiModelEnumModel(BaseEnumModel): model: GoogleAiModel @@ -309,7 +314,6 @@ class SkillCategoryModel(BaseEnumModel): "AzureApiVersion": AzureApiVersionEnumModel, "AzureRegion": AzureRegionEnumModel, "TtsVoiceGender": TtsVoiceGenderEnumModel, - "OpenAiModel": OpenAiModelEnumModel, "MistralModel": MistralModelEnumModel, "GoogleAiModel": GoogleAiModelEnumModel, "WingmanProAzureDeployment": WingmanProAzureDeploymentEnumModel, @@ -324,6 +328,7 @@ class SkillCategoryModel(BaseEnumModel): "WingmanProSttProvider": WingmanProSttProviderModel, "WingmanProTtsProvider": WingmanProTtsProviderModel, "SkillCategory": SkillCategoryModel, + "PerplexityModel": PerplexityModelEnumModel, # Add new enums here as key-value pairs } diff --git a/api/interface.py b/api/interface.py index 28c49f7e..586c8668 100644 --- a/api/interface.py +++ b/api/interface.py @@ -11,7 +11,6 @@ CustomPropertyType, SkillCategory, TtsVoiceGender, - OpenAiModel, OpenAiTtsVoice, SoundEffect, SttProvider, @@ -22,6 +21,7 @@ WingmanProRegion, WingmanProSttProvider, WingmanProTtsProvider, + PerplexityModel, ) @@ -278,7 +278,7 @@ class XVASynthTtsConfig(BaseModel): class OpenAiConfig(BaseModel): - conversation_model: OpenAiModel + conversation_model: str """ The model to use for conversations aka "chit-chat" and for function calls. """ @@ -311,11 +311,21 @@ class MistralConfig(BaseModel): endpoint: str +class PerplexityConfig(BaseModel): + conversation_model: PerplexityModel + endpoint: str + + class GroqConfig(BaseModel): conversation_model: str endpoint: str +class CerebrasConfig(BaseModel): + conversation_model: str + endpoint: str + + class GoogleConfig(BaseModel): conversation_model: GoogleAiModel @@ -391,6 +401,25 @@ class FeaturesConfig(BaseModel): use_generic_instant_responses: bool +class AudioFile(BaseModel): + path: str + """The audio file to play. Required.""" + + name: str + """The name of the audio file.""" + + +class AudioFileConfig(BaseModel): + files: list[AudioFile] + """The audio file(s) to play. If there are multiple, a random file will be played.""" + + volume: float + """The volume to play the audio file at.""" + + wait: bool + """Whether to wait for the audio file to finish playing before continuing.""" + + class CommandKeyboardConfig(BaseModel): hotkey: str """The hotkey. Can be a single key like 'a' or a combination like 'ctrl+shift+a'.""" @@ -441,6 +470,9 @@ class CommandActionConfig(BaseModel): write: Optional[str] = None """The word or phrase to type, for example, to type text in a login screen. Must have associated button press to work. May need special formatting for special characters.""" + audio: Optional[AudioFileConfig] = None + """The audio file to play. Optional.""" + class CommandConfig(BaseModel): name: str @@ -507,7 +539,15 @@ class CustomProperty(BaseModel): """The name of the property. 
Has to be unique""" name: str """The "friendly" name of the property, displayed in the UI.""" - value: str | int | float | bool | VoiceSelection | list[VoiceSelection] + value: ( + str + | int + | float + | bool + | VoiceSelection + | list[VoiceSelection] + | AudioFileConfig + ) """The value of the property""" property_type: CustomPropertyType """Determines the type of the property and which controls to render in the UI.""" @@ -553,6 +593,7 @@ class NestedConfig(BaseModel): openai: OpenAiConfig mistral: MistralConfig groq: GroqConfig + cerebras: CerebrasConfig google: GoogleConfig openrouter: OpenRouterConfig local_llm: LocalLlmConfig @@ -562,6 +603,7 @@ class NestedConfig(BaseModel): xvasynth: XVASynthTtsConfig whispercpp: WhispercppSttConfig wingman_pro: WingmanProConfig + perplexity: PerplexityConfig commands: Optional[list[CommandConfig]] = None skills: Optional[list[SkillConfig]] = None diff --git a/assets/wingman-ui-1.png b/assets/wingman-ui-1.png index 597e83c5..049a1836 100644 Binary files a/assets/wingman-ui-1.png and b/assets/wingman-ui-1.png differ diff --git a/assets/wingman-ui-2.png b/assets/wingman-ui-2.png index 83ed6874..f5e2a859 100644 Binary files a/assets/wingman-ui-2.png and b/assets/wingman-ui-2.png differ diff --git a/assets/wingman-ui-3.png b/assets/wingman-ui-3.png index 2e651505..be5a69f9 100644 Binary files a/assets/wingman-ui-3.png and b/assets/wingman-ui-3.png differ diff --git a/assets/wingman-ui-4.png b/assets/wingman-ui-4.png index 11294c23..07189a97 100644 Binary files a/assets/wingman-ui-4.png and b/assets/wingman-ui-4.png differ diff --git a/assets/wingman-ui-5.png b/assets/wingman-ui-5.png new file mode 100644 index 00000000..f2ebd47e Binary files /dev/null and b/assets/wingman-ui-5.png differ diff --git a/assets/wingman-ui-6.png b/assets/wingman-ui-6.png new file mode 100644 index 00000000..ed93b958 Binary files /dev/null and b/assets/wingman-ui-6.png differ diff --git a/assets/wingman-ui-7.png b/assets/wingman-ui-7.png new file mode 100644 index 00000000..a4c4f898 Binary files /dev/null and b/assets/wingman-ui-7.png differ diff --git a/assets/wingman-ui-8.png b/assets/wingman-ui-8.png new file mode 100644 index 00000000..f248e918 Binary files /dev/null and b/assets/wingman-ui-8.png differ diff --git a/main.py b/main.py index 269b2144..6e7277dc 100644 --- a/main.py +++ b/main.py @@ -131,6 +131,29 @@ def custom_openapi(): # Ensure the components.schemas key exists openapi_schema.setdefault("components", {}).setdefault("schemas", {}) + # Add enums to schema + for enum_name, enum_model in ENUM_TYPES.items(): + enum_field_name, enum_type = next(iter(enum_model.__annotations__.items())) + if issubclass(enum_type, Enum): + enum_values = [e.value for e in enum_type] + enum_schema = { + "type": "string", + "enum": enum_values, + "description": f"Possible values for {enum_name}", + } + openapi_schema["components"]["schemas"][enum_name] = enum_schema + + openapi_schema["components"]["schemas"]["CommandActionConfig"] = { + "type": "object", + "properties": { + "keyboard": {"$ref": "#/components/schemas/CommandKeyboardConfig"}, + "wait": {"type": "number"}, + "mouse": {"$ref": "#/components/schemas/CommandMouseConfig"}, + "write": {"type": "string"}, + "audio": {"$ref": "#/components/schemas/AudioFileConfig"}, + }, + } + # Add WebSocket command models to schema for cls in WebSocketCommandModel.__subclasses__(): cls_schema_dict = cls.model_json_schema( @@ -156,18 +179,6 @@ def custom_openapi(): cls_schema_dict.setdefault("required", []).append(field_name) 
openapi_schema["components"]["schemas"][cls.__name__] = cls_schema_dict - # Add enums to schema - for enum_name, enum_model in ENUM_TYPES.items(): - enum_field_name, enum_type = next(iter(enum_model.__annotations__.items())) - if issubclass(enum_type, Enum): - enum_values = [e.value for e in enum_type] - enum_schema = { - "type": "string", - "enum": enum_values, - "description": f"Possible values for {enum_name}", - } - openapi_schema["components"]["schemas"][enum_name] = enum_schema - app.openapi_schema = openapi_schema return app.openapi_schema @@ -244,7 +255,7 @@ async def async_main(host: str, port: int, sidecar: bool): secret = input(f"Please enter your '{error.secret_name}' API key/secret: ") if secret: secret_keeper.secrets[error.secret_name] = secret - secret_keeper.save() + await secret_keeper.save() saved_secrets.append(error.secret_name) else: return diff --git a/providers/elevenlabs.py b/providers/elevenlabs.py index 5a6cfaf9..de10c77a 100644 --- a/providers/elevenlabs.py +++ b/providers/elevenlabs.py @@ -1,8 +1,6 @@ -from elevenlabslib import ( - User, - GenerationOptions, - PlaybackOptions, -) +import asyncio +from typing import Optional +from elevenlabslib import User, GenerationOptions, PlaybackOptions, SFXGenerationOptions from api.enums import SoundEffect, WingmanInitializationErrorType from api.interface import ElevenlabsConfig, SoundConfig, WingmanInitializationError from services.audio_player import AudioPlayer @@ -55,12 +53,12 @@ def notify_playback_finished(): contains_high_end_radio = SoundEffect.HIGH_END_RADIO in sound_config.effects if contains_high_end_radio: - audio_player.play_wav("Radio_Static_Beep.wav", sound_config.volume) + audio_player.play_wav_sample("Radio_Static_Beep.wav", sound_config.volume) if sound_config.play_beep: - audio_player.play_wav("beep.wav", sound_config.volume) + audio_player.play_wav_sample("beep.wav", sound_config.volume) elif sound_config.play_beep_apollo: - audio_player.play_wav("Apollo_Beep.wav", sound_config.volume) + audio_player.play_wav_sample("Apollo_Beep.wav", sound_config.volume) WebSocketUser.ensure_async( audio_player.notify_playback_finished(wingman_name) @@ -68,13 +66,13 @@ def notify_playback_finished(): def notify_playback_started(): if sound_config.play_beep: - audio_player.play_wav("beep.wav", sound_config.volume) + audio_player.play_wav_sample("beep.wav", sound_config.volume) elif sound_config.play_beep_apollo: - audio_player.play_wav("Apollo_Beep.wav", sound_config.volume) + audio_player.play_wav_sample("Apollo_Beep.wav", sound_config.volume) contains_high_end_radio = SoundEffect.HIGH_END_RADIO in sound_config.effects if contains_high_end_radio: - audio_player.play_wav("Radio_Static_Beep.wav", sound_config.volume) + audio_player.play_wav_sample("Radio_Static_Beep.wav", sound_config.volume) WebSocketUser.ensure_async( audio_player.notify_playback_started(wingman_name) @@ -142,6 +140,32 @@ def playback_finished(wingman_name): audio_player.playback_events.subscribe("finished", playback_finished) + async def generate_sound_effect( + self, + prompt: str, + duration_seconds: Optional[float] = None, + prompt_influence: Optional[float] = None, + ): + user = User(self.api_key) + options = SFXGenerationOptions( + duration_seconds=duration_seconds, prompt_influence=prompt_influence + ) + req, _ = user.generate_sfx(prompt, options) + + result_ready = asyncio.Event() + audio: bytes = None + + def get_result(future: asyncio.Future[bytes]): + nonlocal audio + audio = future.result() + result_ready.set() # Signal that the result is 
ready + + req.add_done_callback(get_result) + + # Wait for the result to be ready + await result_ready.wait() + return audio + def get_available_voices(self): user = User(self.api_key) return user.get_available_voices() @@ -149,3 +173,7 @@ def get_available_voices(self): def get_available_models(self): user = User(self.api_key) return user.get_models() + + def get_subscription_data(self): + user = User(self.api_key) + return user.get_subscription_data() diff --git a/requirements.txt b/requirements.txt index 913e655e..adca6269 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,6 +1,6 @@ azure-cognitiveservices-speech==1.38.0 edge-tts==6.1.12 -elevenlabslib==0.22.5 +elevenlabslib==0.22.6 fastapi==0.111.0 google-generativeai==0.7.0 markdown==3.6 diff --git a/services/audio_library.py b/services/audio_library.py new file mode 100644 index 00000000..79159d00 --- /dev/null +++ b/services/audio_library.py @@ -0,0 +1,231 @@ +import asyncio +import os +import threading +import time +from os import path +from random import randint +from api.interface import AudioFile, AudioFileConfig +from services.printr import Printr +from services.audio_player import AudioPlayer +from services.file import get_writable_dir + +printr = Printr() +DIR_AUDIO_LIBRARY = "audio_library" + +class AudioLibrary: + def __init__( + self, + callback_playback_started: callable = None, + callback_playback_finished: callable = None, + ): + # Configurable settings + self.callback_playback_started = callback_playback_started # Parameters: AudioFileConfig, AudioPlayer, volume(float) + self.callback_playback_finished = ( + callback_playback_finished # Parameters: AudioFileConfig + ) + + # Internal settings + self.audio_library_path = get_writable_dir(DIR_AUDIO_LIBRARY) + self.current_playbacks = {} + + ########################## + ### Playback functions ### + ########################## + + async def start_playback( + self, audio_file: AudioFile | AudioFileConfig, volume_modifier: float = 1.0 + ): + audio_file = self.__get_audio_file_config(audio_file) + audio_player = AudioPlayer( + asyncio.get_event_loop(), self.on_playback_started, self.on_playback_finish + ) + + selected_file = self.__get_random_audio_file_from_config(audio_file) + playback_key = self.__get_playback_key(selected_file) + + # skip if file does not exist + if not path.exists(path.join(self.audio_library_path, selected_file.path, selected_file.name)): + printr.toast_error( + f"Skipping playback of {selected_file.name} as it does not exist in the audio library." 
+ ) + return + + # stop running playbacks of configured files + await self.stop_playback(audio_file, 0.1) + + async def actual_start_playback( + audio_file: AudioFile, + audio_player: AudioPlayer, + volume: list, + ): + full_path = path.join( + self.audio_library_path, audio_file.path, audio_file.name + ) + await audio_player.play_audio_file( + filename=full_path, + volume=volume, + wingman_name=playback_key, + publish_event=False, + ) + + volume = [(audio_file.volume or 1.0) * volume_modifier] + self.current_playbacks[playback_key] = [ + audio_player, + volume, + selected_file, + ] + self.__threaded_execution( + actual_start_playback, selected_file, audio_player, volume + ) + if audio_file.wait: + while True: + time.sleep(0.1) + status = self.get_playback_status(selected_file) + if not status[1]: # no audio player + break + + async def stop_playback( + self, + audio_file: AudioFile | AudioFileConfig, + fade_out_time: float = 0.5, + fade_out_resolution: int = 20, + ): + async def fade_out( + audio_player: AudioPlayer, + volume: list[float], + fade_out_time: int, + fade_out_resolution: int, + ): + original_volume = volume[0] + step_size = original_volume / fade_out_resolution + step_duration = fade_out_time / fade_out_resolution + while audio_player.is_playing and volume[0] > 0.0001: + volume[0] -= step_size + await asyncio.sleep(step_duration) + await asyncio.sleep(0.05) # 50ms grace period + await audio_player.stop_playback() + + for file in self.__get_audio_file_config(audio_file).files: + status = self.get_playback_status(file) + audio_player = status[1] + volume = status[2] + if audio_player: + if fade_out_time > 0: + self.__threaded_execution( + fade_out, + audio_player, + volume, + fade_out_time, + fade_out_resolution, + ) + else: + await audio_player.stop_playback() + + self.current_playbacks.pop(self.__get_playback_key(file), None) + + def get_playback_status( + self, audio_file: AudioFile + ) -> list[bool, AudioPlayer | None, list[float] | None]: + playback_key = self.__get_playback_key(audio_file) + + if playback_key in self.current_playbacks: + audio_player = self.current_playbacks[playback_key][0] + return [ + audio_player.is_playing, # Is playing + audio_player, # AudioPlayer + self.current_playbacks[playback_key][1], # Current Volume list + ] + return [False, None, None] + + async def change_playback_volume( + self, audio_file: AudioFile | AudioFileConfig, volume: float + ): + audio_file = self.__get_audio_file_config(audio_file) + playback_keys = [ + self.__get_playback_key(current_file) + for current_file in audio_file.files + ] + + for playback_key in playback_keys: + if playback_key in self.current_playbacks: + self.current_playbacks[playback_key][1][0] = volume + + async def on_playback_started(self, file_path: str): + self.notify_playback_started(file_path) + # Placeholder for future implementations + + async def on_playback_finish(self, file_path: str): + self.notify_playback_finished(file_path) + self.current_playbacks.pop(file_path, None) + + def notify_playback_started(self, file_path: str): + if self.callback_playback_started: + # Give the callback the audio file that started playing and current volume + audio_file = self.current_playbacks[file_path][2] + audio_player = self.current_playbacks[file_path][0] + volume = self.current_playbacks[file_path][1][0] + self.callback_playback_started(audio_file, audio_player, volume) + + def notify_playback_finished(self, file_path: str): + if self.callback_playback_finished: + # Give the callback the audio file that finished 
playing + audio_file = self.current_playbacks[file_path][2] + self.callback_playback_finished(audio_file) + + ############################### + ### Audio Library functions ### + ############################### + + def get_audio_files(self) -> list[AudioFile]: + audio_files = [] + try: + for root, _, files in os.walk(self.audio_library_path): + for file in files: + if file.endswith((".wav", ".mp3")): + rel_path = path.relpath(root, self.audio_library_path) + rel_path = "" if rel_path == "." else rel_path + audio_files.append(AudioFile(path=rel_path, name=file)) + except Exception: + pass + return audio_files + + ######################## + ### Helper functions ### + ######################## + + def __threaded_execution(self, function, *args) -> threading.Thread: + """Execute a function in a separate thread.""" + + def start_thread(function, *args): + if asyncio.iscoroutinefunction(function): + new_loop = asyncio.new_event_loop() + asyncio.set_event_loop(new_loop) + new_loop.run_until_complete(function(*args)) + new_loop.close() + else: + function(*args) + + thread = threading.Thread(target=start_thread, args=(function, *args)) + thread.start() + return thread + + def __get_audio_file_config( + self, audio_file: AudioFile | AudioFileConfig + ) -> AudioFileConfig: + if isinstance(audio_file, AudioFile): + return AudioFileConfig(files=[audio_file], volume=1, wait=False) + return audio_file + + def __get_random_audio_file_from_config( + self, audio_file: AudioFileConfig + ) -> AudioFile: + size = len(audio_file.files) + if size > 1: + index = randint(0, size - 1) + else: + index = 0 + + return audio_file.files[index] + + def __get_playback_key(self, audio_file: AudioFile) -> str: + return path.join(audio_file.path, audio_file.name) \ No newline at end of file diff --git a/services/audio_player.py b/services/audio_player.py index e1634cb5..799eba67 100644 --- a/services/audio_player.py +++ b/services/audio_player.py @@ -1,5 +1,6 @@ import asyncio import io +import wave from os import path from threading import Thread from typing import Callable @@ -16,7 +17,6 @@ get_sound_effects, ) - class AudioPlayer: def __init__( self, @@ -42,14 +42,21 @@ def set_event_loop(self, loop: asyncio.AbstractEventLoop): self.event_loop = loop def start_playback( - self, audio, sample_rate, channels, finished_callback, volume: float + self, + audio, + sample_rate, + channels, + finished_callback, + volume: list[float] | float, ): def callback(outdata, frames, time, status): + # this is a super hacky way to update volume while the playback is running + local_volume = volume[0] if isinstance(volume, list) else volume nonlocal playhead chunksize = frames * channels # If we are at the end of the audio buffer, stop playback - if playhead * channels >= len(audio): + if playhead >= len(audio): if np.issubdtype(outdata.dtype, np.floating): outdata.fill(0.0) # Fill with zero for floats else: @@ -66,10 +73,10 @@ def callback(outdata, frames, time, status): if channels > 1 and current_chunk.ndim == 1: current_chunk = np.tile(current_chunk[:, np.newaxis], (1, channels)) - # Flat the chunk + # Flatten the chunk current_chunk = current_chunk.ravel() - required_length = frames * channels + required_length = chunksize # Ensure current_chunk has the required length if len(current_chunk) < required_length: @@ -81,7 +88,7 @@ def callback(outdata, frames, time, status): # Reshape current_chunk to match outdata's shape, only if size matches try: current_chunk = current_chunk.reshape((frames, channels)) - current_chunk = current_chunk * 
volume + current_chunk = current_chunk * local_volume if np.issubdtype(outdata.dtype, np.floating): outdata[:] = current_chunk.astype(outdata.dtype) else: @@ -95,7 +102,7 @@ def callback(outdata, frames, time, status): ) # Safely fill zero to avoid noise # Update playhead - playhead = end + playhead += frames # Check if playback should stop (end of audio) if playhead >= len(audio): @@ -107,6 +114,7 @@ def callback(outdata, frames, time, status): # Initial playhead position playhead = 0 + self.is_playing = True # Create and start the audio stream self.stream = sd.OutputStream( @@ -195,21 +203,49 @@ def finished_callback(): await self.notify_playback_started(wingman_name) - async def notify_playback_started(self, wingman_name: str): - await self.playback_events.publish("started", wingman_name) + async def notify_playback_started( + self, wingman_name: str, publish_event: bool = True + ): + if publish_event: + await self.playback_events.publish("started", wingman_name) if callable(self.on_playback_started): await self.on_playback_started(wingman_name) - async def notify_playback_finished(self, wingman_name: str): - await self.playback_events.publish("finished", wingman_name) + async def notify_playback_finished( + self, wingman_name: str, publish_event: bool = True + ): + if publish_event: + await self.playback_events.publish("finished", wingman_name) if callable(self.on_playback_finished): await self.on_playback_finished(wingman_name) - def play_wav(self, audio_sample_file: str, volume: float): - beep_audio, beep_sample_rate = self.get_audio_from_file( - path.join(self.sample_dir, audio_sample_file) - ) - self.start_playback(beep_audio, beep_sample_rate, 1, None, volume) + def play_wav_sample(self, audio_sample_file: str, volume: float): + file_path = path.join(self.sample_dir, audio_sample_file) + self.play_wav(file_path, volume) + + def play_wav(self, audio_file: str, volume: list[float] | float): + audio, sample_rate = self.get_audio_from_file(audio_file) + with wave.open(audio_file, "rb") as audio_file: + num_channels = audio_file.getnchannels() + self.start_playback(audio, sample_rate, num_channels, None, volume) + + def play_mp3(self, audio_sample_file: str, volume: list[float] | float): + audio, sample_rate = self.get_audio_from_file(audio_sample_file) + self.start_playback(audio, sample_rate, 2, None, volume) + + async def play_audio_file( + self, + filename: str, + volume: list[float] | float, + wingman_name: str = None, + publish_event: bool = True, + ): + await self.notify_playback_started(wingman_name, publish_event) + if filename.endswith(".mp3"): + self.play_mp3(filename, volume) + elif filename.endswith(".wav"): + self.play_wav(filename, volume) + await self.notify_playback_finished(wingman_name, publish_event) def get_audio_from_file(self, filename: str) -> tuple: audio, sample_rate = sf.read(filename, dtype="float32") @@ -345,9 +381,9 @@ def get_mixed_chunk(length): if num_samples_to_copy > remaining: num_samples_to_copy = remaining - chunk[ - length - remaining : length - remaining + num_samples_to_copy - ] = noise_audio[mixed_pos:(mixed_pos + num_samples_to_copy)] + chunk[length - remaining : length - remaining + num_samples_to_copy] = ( + noise_audio[mixed_pos : (mixed_pos + num_samples_to_copy)] + ) remaining -= num_samples_to_copy mixed_pos = mixed_pos + num_samples_to_copy return chunk @@ -404,13 +440,13 @@ def callback(outdata, frames, time, status): await self.notify_playback_started(wingman_name) if config.play_beep: - self.play_wav("beep.wav", config.volume) + 
self.play_wav_sample("beep.wav", config.volume) elif config.play_beep_apollo: - self.play_wav("Apollo_Beep.wav", config.volume) + self.play_wav_sample("Apollo_Beep.wav", config.volume) contains_high_end_radio = SoundEffect.HIGH_END_RADIO in config.effects if contains_high_end_radio: - self.play_wav("Radio_Static_Beep.wav", config.volume) + self.play_wav_sample("Radio_Static_Beep.wav", config.volume) self.raw_stream.start() @@ -447,12 +483,12 @@ def callback(outdata, frames, time, status): contains_high_end_radio = SoundEffect.HIGH_END_RADIO in config.effects if contains_high_end_radio: - self.play_wav("Radio_Static_Beep.wav", config.volume) + self.play_wav_sample("Radio_Static_Beep.wav", config.volume) if config.play_beep: - self.play_wav("beep.wav", config.volume) + self.play_wav_sample("beep.wav", config.volume) elif config.play_beep_apollo: - self.play_wav("Apollo_Beep.wav", config.volume) + self.play_wav_sample("Apollo_Beep.wav", config.volume) self.is_playing = False await self.notify_playback_finished(wingman_name) diff --git a/services/command_handler.py b/services/command_handler.py index fb0aa941..52c25f5e 100644 --- a/services/command_handler.py +++ b/services/command_handler.py @@ -43,10 +43,8 @@ async def dispatch(self, message, websocket: WebSocket): RecordKeyboardActionsCommand(**command), websocket ) elif command_name == "stop_recording": - # Get Enum from string - recording_type = KeyboardRecordingType(command["recording_type"]) await self.handle_stop_recording( - StopRecordingCommand(**command), websocket, recording_type + StopRecordingCommand(**command), websocket ) else: raise ValueError("Unknown command") @@ -66,7 +64,7 @@ async def handle_secret(self, command: SaveSecretCommand, websocket: WebSocket): secret_name = command.secret_name secret_value = command.secret_value self.secret_keeper.secrets[secret_name] = secret_value - self.secret_keeper.save() + await self.secret_keeper.save() if command.show_message: await self.printr.print_async( @@ -110,7 +108,7 @@ async def handle_record_keyboard_actions( def _on_key_event(event): if event.event_type == "down" and event.name == "esc": WebSocketUser.ensure_async( - self.handle_stop_recording(None, None, command.recording_type) + self.handle_stop_recording(command, websocket) ) if ( event.scan_code == 58 @@ -129,7 +127,7 @@ def _on_key_event(event): and self._is_hotkey_recording_finished(self.recorded_keys) ): WebSocketUser.ensure_async( - self.handle_stop_recording(None, None, command.recording_type) + self.handle_stop_recording(command, websocket) ) self.hook_callback = keyboard.hook(_on_key_event, suppress=True) @@ -138,7 +136,6 @@ async def handle_stop_recording( self, command: StopRecordingCommand, websocket: WebSocket, - recording_type: KeyboardRecordingType = KeyboardRecordingType.SINGLE, ): if self.hook_callback: keyboard.unhook(self.hook_callback) @@ -148,7 +145,7 @@ async def handle_stop_recording( actions = ( self._get_actions_from_recorded_keys(recorded_keys) - if recording_type == KeyboardRecordingType.MACRO_ADVANCED + if command.recording_type == KeyboardRecordingType.MACRO_ADVANCED else self._get_actions_from_recorded_hotkey(recorded_keys) ) command = ActionsRecordedCommand(command="actions_recorded", actions=actions) diff --git a/services/config_manager.py b/services/config_manager.py index ac89d3a8..c4eb6401 100644 --- a/services/config_manager.py +++ b/services/config_manager.py @@ -942,6 +942,7 @@ def merge_configs(self, default: Config, wingman): "openai", "mistral", "groq", + "cerebras", "google", 
"openrouter", "local_llm", @@ -951,6 +952,7 @@ def merge_configs(self, default: Config, wingman): "whispercpp", "xvasynth", "wingman_pro", + "perplexity", ]: if key in default: # Use copy.deepcopy to ensure a full deep copy is made and original is untouched. diff --git a/services/config_migration_service.py b/services/config_migration_service.py index 5b4040f9..fd2e99d4 100644 --- a/services/config_migration_service.py +++ b/services/config_migration_service.py @@ -15,41 +15,141 @@ from services.file import get_users_dir from services.printr import Printr from services.secret_keeper import SecretKeeper +from services.system_manager import SystemManager MIGRATION_LOG = ".migration" class ConfigMigrationService: - def __init__(self, config_manager: ConfigManager): + def __init__(self, config_manager: ConfigManager, system_manager: SystemManager): self.config_manager = config_manager + self.system_manager = system_manager self.printr = Printr() self.log_message: str = datetime.now().strftime("%Y-%m-%d-%H-%M-%S") + "\n" + self.users_dir = get_users_dir() + self.latest_version = MIGRATIONS[-1][1] + self.latest_config_path = path.join( + self.users_dir, self.latest_version, CONFIGS_DIR + ) + + def migrate_to_latest(self): + + # Find the earliest existing version that needs migration + earliest_version = self.find_earliest_existing_version(self.users_dir) + + if not earliest_version: + self.log("No valid version directories found for migration.", True) + return + + # Check if the latest version is already migrated + + migration_file = path.join(self.latest_config_path, MIGRATION_LOG) + + if path.exists(migration_file): + self.log( + f"Found {self.latest_version} configs. No migrations needed.", True + ) + return + + self.log( + f"Starting migration from version {earliest_version.replace('_', '.')} to {self.latest_version.replace('_', '.')}", + True, + ) + + # Perform migrations + current_version = earliest_version + while current_version != self.latest_version: + next_version = self.find_next_version(current_version) + self.perform_migration(current_version, next_version) + current_version = next_version + + self.log( + f"Migration completed successfully. 
Current version: {self.latest_version.replace('_', '.')}", + True, + ) + + def find_earliest_existing_version(self, users_dir): + versions = self.get_valid_versions(users_dir) + versions.sort(key=lambda v: [int(n) for n in v.split("_")]) + + for version in versions: + if version != self.latest_version: + return version + + return None + + def find_next_version(self, current_version): + for old, new, _ in MIGRATIONS: + if old == current_version: + return new + return None + + def perform_migration(self, old_version, new_version): + migration_func = next( + (m[2] for m in MIGRATIONS if m[0] == old_version and m[1] == new_version), + None, + ) + + if migration_func: + self.log( + f"Migrating from {old_version.replace('_', '.')} to {new_version.replace('_', '.')}", + True, + ) + migration_func(self) + else: + self.err(f"No migration path found from {old_version} to {new_version}") + raise ValueError( + f"No migration path found from {old_version} to {new_version}" + ) + + def find_previous_version(self, users_dir, current_version): + versions = self.get_valid_versions(users_dir) + versions.sort(key=lambda v: [int(n) for n in v.split("_")]) + index = versions.index(current_version) + return versions[index - 1] if index > 0 else None + + def get_valid_versions(self, users_dir): + versions = next(os.walk(users_dir))[1] + return [v for v in versions if self.is_valid_version(v)] + + def find_latest_user_version(self, users_dir): + valid_versions = self.get_valid_versions(users_dir) + return max( + valid_versions, + default=None, + key=lambda v: [int(n) for n in v.split("_")], + ) + + def is_valid_version(self, version): + return any(version in migration[:2] for migration in MIGRATIONS) # MIGRATIONS def migrate_140_to_150(self): - def migrate_settings(old: dict, new: SettingsConfig) -> dict: + def migrate_settings(old: dict, new: dict) -> dict: old["voice_activation"]["whispercpp_config"] = { - "temperature": new.voice_activation.whispercpp_config.temperature + "temperature": new["voice_activation"]["whispercpp_config"][ + "temperature" + ] } - old["voice_activation"]["whispercpp"] = self.config_manager.convert_to_dict( - new.voice_activation.whispercpp - ) + old["voice_activation"]["whispercpp"] = new["voice_activation"][ + "whispercpp" + ] self.log("- applied new split whispercpp settings/config structure") - old["xvasynth"] = self.config_manager.convert_to_dict(new.xvasynth) + old["xvasynth"] = new["xvasynth"] self.log("- adding new XVASynth settings") old.pop("audio", None) self.log("- removed audio device settings because DirectSound was removed") return old - def migrate_defaults(old: dict, new: NestedConfig) -> dict: + def migrate_defaults(old: dict, new: dict) -> dict: # add new properties - old["sound"]["volume"] = new.sound.volume + old["sound"]["volume"] = new["sound"]["volume"] if old["sound"].get("play_beep_apollo", None) is None: - old["sound"]["play_beep_apollo"] = new.sound.play_beep_apollo - old["google"] = self.config_manager.convert_to_dict(new.google) + old["sound"]["play_beep_apollo"] = new["sound"]["play_beep_apollo"] + old["google"] = new["google"] self.log("- added new properties: sound.volume, google") # remove obsolete properties @@ -65,24 +165,24 @@ def migrate_defaults(old: dict, new: NestedConfig) -> dict: ) # rest of whispercpp moved to settings.yaml - old["whispercpp"] = {"temperature": new.whispercpp.temperature} + old["whispercpp"] = {"temperature": new["whispercpp"]["temperature"]} self.log("- cleaned up whispercpp properties") # xvasynth was restructured - 
old["xvasynth"] = self.config_manager.convert_to_dict(new.xvasynth) + old["xvasynth"] = new["xvasynth"] self.log("- resetting and restructuring XVASynth") # patching new default values - old["features"]["stt_provider"] = new.features.stt_provider.value + old["features"]["stt_provider"] = new["features"]["stt_provider"] self.log("- set whispercpp as new default STT provider") - old["openai"]["conversation_model"] = new.openai.conversation_model.value - old["azure"]["conversation"][ - "deployment_name" - ] = new.azure.conversation.deployment_name - old["wingman_pro"][ + old["openai"]["conversation_model"] = new["openai"]["conversation_model"] + old["azure"]["conversation"]["deployment_name"] = new["azure"][ + "conversation" + ]["deployment_name"] + old["wingman_pro"]["conversation_deployment"] = new["wingman_pro"][ "conversation_deployment" - ] = new.wingman_pro.conversation_deployment.value + ] self.log("- set gpt-4o-mini as new default LLM model") return old @@ -186,6 +286,30 @@ def find_skill(skills, module_name): migrate_wingman=migrate_wingman, ) + def migrate_150_to_160(self): + def migrate_settings(old: dict, new: dict) -> dict: + return old + + def migrate_defaults(old: dict, new: dict) -> dict: + # add new properties + old["cerebras"] = new["cerebras"] + old["perplexity"] = new["perplexity"] + + self.log("- added new properties: cerebras, perplexity") + + return old + + def migrate_wingman(old: dict, new: Optional[dict]) -> dict: + return old + + self.migrate( + old_version="1_5_0", + new_version="1_6_0", + migrate_settings=migrate_settings, + migrate_defaults=migrate_defaults, + migrate_wingman=migrate_wingman, + ) + # INTERNAL def log(self, message: str, highlight: bool = False): @@ -217,14 +341,23 @@ def migrate( self, old_version: str, new_version: str, - migrate_settings: Callable[[dict, SettingsConfig], dict], - migrate_defaults: Callable[[dict, NestedConfig], dict], + migrate_settings: Callable[[dict, dict], dict], + migrate_defaults: Callable[[dict, dict], dict], migrate_wingman: Callable[[dict, Optional[dict]], dict], ) -> None: users_dir = get_users_dir() old_config_path = path.join(users_dir, old_version, CONFIGS_DIR) new_config_path = path.join(users_dir, new_version, CONFIGS_DIR) + if not path.exists(path.join(users_dir, new_version)): + shutil.copytree( + path.join(users_dir, self.latest_version, "migration", new_version), + path.join(users_dir, new_version), + ) + self.log( + f"{new_version} configs not found during multi-step migration. Copied migration templates." 
+ ) + already_migrated = path.exists(path.join(new_config_path, MIGRATION_LOG)) if already_migrated: self.log( @@ -242,24 +375,27 @@ def migrate( old_file = path.join(root, filename) new_file = old_file.replace(old_config_path, new_config_path) - if filename == ".DS_Store": + if filename == ".DS_Store" or filename == MIGRATION_LOG: continue # secrets if filename == "secrets.yaml": self.copy_file(old_file, new_file) - secret_keeper = SecretKeeper() - secret_keeper.secrets = secret_keeper.load() + + if new_config_path == self.latest_config_path: + secret_keeper = SecretKeeper() + secret_keeper.secrets = secret_keeper.load() # settings elif filename == "settings.yaml": self.log("Migrating settings.yaml...", True) migrated_settings = migrate_settings( old=self.config_manager.read_config(old_file), - new=self.config_manager.settings_config, + new=self.config_manager.read_config(new_file), ) try: - self.config_manager.settings_config = SettingsConfig( - **migrated_settings - ) + if new_config_path == self.latest_config_path: + self.config_manager.settings_config = SettingsConfig( + **migrated_settings + ) self.config_manager.save_settings_config() except ValidationError as e: self.err(f"Unable to migrate settings.yaml:\n{str(e)}") @@ -268,7 +404,7 @@ def migrate( self.log("Migrating defaults.yaml...", True) migrated_defaults = migrate_defaults( old=self.config_manager.read_config(old_file), - new=self.config_manager.default_config, + new=self.config_manager.read_config(new_file), ) try: self.config_manager.default_config = NestedConfig( @@ -292,9 +428,10 @@ def migrate( ), ) # validate the merged config - _wingman_config = self.config_manager.merge_configs( - default_config, migrated_wingman - ) + if new_config_path == self.latest_config_path: + _wingman_config = self.config_manager.merge_configs( + default_config, migrated_wingman + ) # diff it wingman_diff = self.config_manager.deep_diff( default_config, migrated_wingman @@ -357,3 +494,10 @@ def migrate( path.join(new_config_path, MIGRATION_LOG), "w", encoding="UTF-8" ) as stream: stream.write(self.log_message) + + +MIGRATIONS = [ + ("1_4_0", "1_5_0", ConfigMigrationService.migrate_140_to_150), + ("1_5_0", "1_6_0", ConfigMigrationService.migrate_150_to_160), + # Add new migrations here in order +] diff --git a/services/config_service.py b/services/config_service.py index 1a1eea88..a02ee49c 100644 --- a/services/config_service.py +++ b/services/config_service.py @@ -502,35 +502,7 @@ async def save_defaults_config( ) async def migrate_configs(self, system_manager: SystemManager): - current_version: str = system_manager.get_local_version() - current_version_str = str(current_version).replace(".", "_") - version_path = get_users_dir() - - def version_key(version_str): - try: - return Version(version_str.replace("_", ".")) - except InvalidVersion: - return Version("0.0.0") # Placeholder for invalid versions - - valid_versions = [ - v for v in os.listdir(version_path) if version_key(v) > Version("0.0.0") - ] - all_versions = sorted(valid_versions, key=version_key) - - if current_version_str not in all_versions: - raise ValueError( - f"Current version {current_version} not found in the directory." 
- ) - - current_index = all_versions.index(current_version_str) - - if current_index == 0: - return # Current version is the oldest, no migrations needed - - previous_version_str = all_versions[current_index - 1] - migration_func_name = f"migrate_{previous_version_str.replace('_', '')}_to_{current_version_str.replace('_', '')}" - - migraton_service = ConfigMigrationService(config_manager=self.config_manager) - if hasattr(migraton_service, migration_func_name): - migration_func = getattr(migraton_service, migration_func_name) - migration_func() + migration_service = ConfigMigrationService( + config_manager=self.config_manager, system_manager=system_manager + ) + migration_service.migrate_to_latest() diff --git a/services/module_manager.py b/services/module_manager.py index e080b65e..bbf32fa9 100644 --- a/services/module_manager.py +++ b/services/module_manager.py @@ -179,7 +179,6 @@ def read_available_skills() -> list[SkillBase]: skill = SkillBase( name=skill_config["name"], config=skill_config, - description=skill_config["description"], logo=logo, ) skills.append(skill) diff --git a/services/pub_sub.py b/services/pub_sub.py index 0326069a..51e18cb4 100644 --- a/services/pub_sub.py +++ b/services/pub_sub.py @@ -1,4 +1,5 @@ import asyncio +import inspect class PubSub: @@ -14,10 +15,26 @@ def unsubscribe(self, event_type, fn): if event_type in self.subscribers: self.subscribers[event_type].remove(fn) - async def publish(self, event_type, data): + async def publish(self, event_type, data=None): if event_type in self.subscribers: for fn in self.subscribers[event_type]: + # Get the number of parameters the function expects + params = inspect.signature(fn).parameters + param_count = len(params) + + # Determine if the function is a method (has 'self' parameter) + is_method = "self" in params + + # Determine if the function expects an argument (excluding 'self' for methods) + expects_arg = (param_count > 1) if is_method else (param_count > 0) + if asyncio.iscoroutinefunction(fn): - await fn(data) + if expects_arg and data is not None: + await fn(data) + else: + await fn() else: - fn(data) + if expects_arg and data is not None: + fn(data) + else: + fn() diff --git a/services/secret_keeper.py b/services/secret_keeper.py index 593f98e2..da60a34b 100644 --- a/services/secret_keeper.py +++ b/services/secret_keeper.py @@ -7,6 +7,7 @@ from services.file import get_writable_dir from services.websocket_user import WebSocketUser from services.printr import Printr +from services.pub_sub import PubSub class SecretKeeper(WebSocketUser): @@ -19,6 +20,7 @@ class SecretKeeper(WebSocketUser): config_file: str secrets: Dict[str, Any] prompted_secrets: list[str] = [] + secret_events: PubSub def __new__(cls): if cls._instance is None: @@ -44,6 +46,7 @@ def __new__(cls): get_writable_dir(CONFIGS_DIR), SECRETS_FILE ) cls._instance.secrets = cls._instance.load() or {} + cls._instance.secret_events = PubSub() return cls._instance @@ -60,7 +63,7 @@ def load(self) -> Dict[str, Any]: self.printr.toast_error(f"Could not load ({SECRETS_FILE})\n{str(e)}") return {} - def save(self) -> bool: + async def save(self) -> bool: if not self.config_file: self.printr.toast_error("No config file path provided.") return False @@ -68,6 +71,7 @@ def save(self) -> bool: with open(self.config_file, "w", encoding="UTF-8") as stream: yaml.dump(self.secrets, stream) self.load() + await self.secret_events.publish("secrets_saved", self.secrets) return True except yaml.YAMLError as e: self.printr.toast_error(f"Could not write 
({SECRETS_FILE})\n{str(e)}") @@ -105,5 +109,5 @@ async def post_secrets(self, secrets: dict[str, Any]): for key, value in secrets.items(): self.secrets[key] = value - if self.save(): + if await self.save(): self.printr.print("Secrets updated.", server_only=True) diff --git a/services/system_manager.py b/services/system_manager.py index c47e608e..cd10e321 100644 --- a/services/system_manager.py +++ b/services/system_manager.py @@ -4,7 +4,7 @@ from packaging import version from api.interface import SystemCore, SystemInfo -LOCAL_VERSION = "1.5.0" +LOCAL_VERSION = "1.6.0" VERSION_ENDPOINT = "https://wingman-ai.com/api/version" diff --git a/services/tower.py b/services/tower.py index 3372df29..6c6d1a28 100644 --- a/services/tower.py +++ b/services/tower.py @@ -8,6 +8,7 @@ from providers.whispercpp import Whispercpp from providers.xvasynth import XVASynth from services.audio_player import AudioPlayer +from services.audio_library import AudioLibrary from services.module_manager import ModuleManager from services.printr import Printr from wingmen.open_ai_wingman import OpenAiWingman @@ -22,10 +23,12 @@ def __init__( self, config: Config, audio_player: AudioPlayer, + audio_library: AudioLibrary, whispercpp: Whispercpp, xvasynth: XVASynth, ): self.audio_player = audio_player + self.audio_library = audio_library self.config = config self.mouse_wingman_dict: dict[str, Wingman] = {} self.wingmen: list[Wingman] = [] @@ -84,6 +87,7 @@ async def __instantiate_wingman( config=wingman_config, settings=settings, audio_player=self.audio_player, + audio_library=self.audio_library, whispercpp=self.whispercpp, xvasynth=self.xvasynth, ) @@ -93,6 +97,7 @@ async def __instantiate_wingman( config=wingman_config, settings=settings, audio_player=self.audio_player, + audio_library=self.audio_library, whispercpp=self.whispercpp, xvasynth=self.xvasynth, ) diff --git a/services/voice_service.py b/services/voice_service.py index e11df008..68b1726e 100644 --- a/services/voice_service.py +++ b/services/voice_service.py @@ -1,3 +1,5 @@ +import asyncio +from concurrent.futures import ThreadPoolExecutor from fastapi import APIRouter from api.enums import AzureRegion, OpenAiTtsVoice from api.interface import ( @@ -15,6 +17,7 @@ from providers.xvasynth import XVASynth from services.audio_player import AudioPlayer from services.config_manager import ConfigManager +from services.printr import Printr class VoiceService: @@ -24,6 +27,7 @@ def __init__( audio_player: AudioPlayer, xvasynth: XVASynth, ): + self.printr = Printr() self.config_manager = config_manager self.audio_player = audio_player self.xvasynth = xvasynth @@ -114,13 +118,22 @@ def __convert_azure_voice(self, voice): ) # GET /voices/elevenlabs - def get_elevenlabs_voices(self, api_key: str): + async def get_elevenlabs_voices(self, api_key: str) -> list[VoiceInfo]: elevenlabs = ElevenLabs(api_key=api_key, wingman_name="") - voices = elevenlabs.get_available_voices() - convert = lambda voice: VoiceInfo(id=voice.voiceID, name=voice.name) - result = [convert(voice) for voice in voices] + try: + # Run the synchronous method in a separate thread + loop = asyncio.get_running_loop() + with ThreadPoolExecutor() as pool: + voices = await loop.run_in_executor( + pool, elevenlabs.get_available_voices + ) - return result + convert = lambda voice: VoiceInfo(id=voice.voiceID, name=voice.name) + result = [convert(voice) for voice in voices] + return result + except ValueError as e: + self.printr.toast_error(f"Elevenlabs: \n{str(e)}") + return [] # GET /voices/azure def 
get_azure_voices(self, api_key: str, region: AzureRegion, locale: str = ""):
diff --git a/skills/ask_perplexity/default_config.yaml b/skills/ask_perplexity/default_config.yaml
new file mode 100644
index 00000000..ff992dfd
--- /dev/null
+++ b/skills/ask_perplexity/default_config.yaml
@@ -0,0 +1,31 @@
+name: AskPerplexity
+module: skills.ask_perplexity.main
+category: general
+description:
+  en: Uses the Perplexity API to get up-to-date information on a wide range of topics. Perplexity is a paid service; you will need a funded account with an active API key, see https://www.perplexity.ai/settings/api
+  de: Verwendet die Perplexity-API, um aktuelle Informationen zu einer Vielzahl von Themen zu erhalten. Perplexity ist ein kostenpflichtiger Dienst, ein Konto mit Guthaben und aktivem API-Key ist notwendig, siehe https://www.perplexity.ai/settings/api
+examples:
+  - question:
+      en: How is the weather today in Berlin?
+      de: Wie ist das Wetter heute in Berlin?
+    answer:
+      en: Today, the weather in Berlin is cloudy with a high of 20°C and a ... (more details)
+      de: Heute ist das Wetter in Berlin bewölkt, mit einer Höchsttemperatur von 20°C und ... (mehr Details)
+  - question:
+      en: In Star Citizen mining, what is currently the best way to find quantanium?
+      de: Beim Mining in Star Citizen, wie finde ich aktuell am besten Quantanium?
+    answer:
+      en: To find Quantanium for mining in Star Citizen, your best bet is Lyria, as it offers ... (more details)
+      de: Um Quantanium im Star Citizen Universum zu finden, ist Lyria der beste Ort, da dort ... (mehr Details)
+prompt: |
+  There is a new function: 'ask_perplexity'
+  Perplexity is a powerful tool that can provide you with up-to-date information on a wide range of topics.
+  Use it every time the user asks a question that implies the need for up-to-date information.
+  Always use this if no other available skill matches the request better to get up-to-date information.
+custom_properties:
+  - id: instant_response
+    name: Instant Response
+    hint: If set, the Perplexity answer will be used instantly and unprocessed. This is faster but will not include format and/or language guidelines set in your wingman.
+ value: False + required: false + property_type: boolean diff --git a/skills/ask_perplexity/logo.png b/skills/ask_perplexity/logo.png new file mode 100644 index 00000000..3479ef77 Binary files /dev/null and b/skills/ask_perplexity/logo.png differ diff --git a/skills/ask_perplexity/main.py b/skills/ask_perplexity/main.py new file mode 100644 index 00000000..869d3c97 --- /dev/null +++ b/skills/ask_perplexity/main.py @@ -0,0 +1,84 @@ +from api.interface import ( + WingmanInitializationError, +) +from skills.skill_base import Skill + +class AskPerplexity(Skill): + + def __init__( + self, + *args, + **kwargs, + ) -> None: + super().__init__(*args, **kwargs) + + self.instant_response = False + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + if not self.wingman.perplexity: + await self.wingman.validate_and_set_perplexity(errors) + + self.instant_response = self.retrieve_custom_property_value("instant_response", errors) + + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "ask_perplexity", + { + "type": "function", + "function": { + "name": "ask_perplexity", + "description": "Expects a question that is answered with up-to-date information from the internet.", + "parameters": { + "type": "object", + "properties": { + "question": {"type": "string"}, + }, + "required": ["question"], + }, + }, + + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "" + instant_response = "" + + if tool_name in ["ask_perplexity"]: + if self.settings.debug_mode: + self.start_execution_benchmark() + + if tool_name == "ask_perplexity" and "question" in parameters: + function_response = self.ask_perplexity(parameters["question"]) + if self.instant_response: + instant_response = function_response + + if self.settings.debug_mode: + await self.printr.print_async( + f"Perplexity answer: {function_response}" + ) + await self.print_execution_time() + + return function_response, instant_response + + def ask_perplexity(self, question: str) -> str: + """Uses the Perplexity API to answer a question.""" + + completion = self.wingman.perplexity.ask( + messages=[{"role": "user", "content": question}], + model=self.wingman.config.perplexity.conversation_model.value, + ) + + if completion and completion.choices: + return completion.choices[0].message.content + else: + return "Error: Unable to retrieve a response from Perplexity API." diff --git a/skills/file_manager/default_config.yaml b/skills/file_manager/default_config.yaml index 16dc408b..deb80aaf 100644 --- a/skills/file_manager/default_config.yaml +++ b/skills/file_manager/default_config.yaml @@ -2,11 +2,11 @@ name: FileManager module: skills.file_manager.main category: general description: - en: Manage local files, save, load and create directories. Supports various text-based file formats. - de: Verwalte lokale Dateien, speichere, lade oder erstelle Verzeichnisse. Unterstützt verschiedene text-basierte Formate. + en: Manage local files, save, load and create directories. Supports various text-based file formats and reading PDFs. + de: Verwalte lokale Dateien, speichere, lade und erstelle Verzeichnisse. Unterstützt verschiedene text-basierte Formate und das Lesen von PDFs. hint: - en: - de: + en: + de: examples: - question: en: Save 'Hello, World!' to hello.txt. 
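For context on the PDF support being added to FileManager here and in main.py below: it relies on pdfminer.six's high-level extract_text API. A minimal sketch of that call follows; the file name and page number are placeholders for illustration, not part of this changeset:

    from pdfminer.high_level import extract_text

    # page_numbers is zero-indexed, so the user-facing "page 5" becomes index 4
    page_text = extract_text("example.pdf", page_numbers=[4])

    # omitting page_numbers extracts the text of every page
    full_text = extract_text("example.pdf")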
@@ -26,9 +26,15 @@ examples: answer: en: (creates a directory named 'Projects' in the default directory) de: (erstellt ein Verzeichnis namens 'Projekte' im Standardverzeichnis) + - question: + en: Read page 5 of example.pdf. + de: Lies Seite 5 von example.pdf. + answer: + en: (loads page 5 of example.pdf and reads it into memory) + de: (lädt Seite 5 von example.pdf und liest sie in den Speicher) prompt: | You can also save text to various file formats, load text from files, or create directories as specified by the user. - You support all plain text file formats. + You support reading and writing all plain text file formats and reading PDF files. When adding text to an existing file, you follow these rules: (1) determine if it is appropriate to add a new line before the added text or ask the user if you do not know. (2) only add content to an existing file if you are sure that is what the user wants. @@ -46,4 +52,4 @@ custom_properties: name: Allow overwrite existing files property_type: boolean required: true - value: false + value: false \ No newline at end of file diff --git a/skills/file_manager/main.py b/skills/file_manager/main.py index 653c94ce..d5be988f 100644 --- a/skills/file_manager/main.py +++ b/skills/file_manager/main.py @@ -6,10 +6,11 @@ from skills.skill_base import Skill from services.file import get_writable_dir from showinfm import show_in_file_manager +from pdfminer.high_level import extract_text if TYPE_CHECKING: from wingmen.open_ai_wingman import OpenAiWingman -DEFAULT_MAX_TEXT_SIZE = 15000 +DEFAULT_MAX_TEXT_SIZE = 24000 SUPPORTED_FILE_EXTENSIONS = [ "adoc", "asc", @@ -46,6 +47,7 @@ "m3u", "map", "md", + "pdf", "pyd", "plist", "pl", @@ -63,11 +65,13 @@ "sql", "svg", "ts", + "tscn", "tcl", "tex", "tmpl", "toml", "tpl", + "tres", "tsv", "txt", "vtt", @@ -129,6 +133,10 @@ def get_tools(self) -> list[tuple[str, dict]]: "type": "string", "description": "The directory from where the file should be loaded. Defaults to the configured directory.", }, + "pdf_page_number_to_load": { + "type": "number", + "description": "The page number of a pdf to load, if expressly specified by the user.", + }, }, "required": ["file_name"], }, @@ -237,25 +245,31 @@ async def execute_tool( ) file_name = parameters.get("file_name") directory = parameters.get("directory_path", self.default_directory) + pdf_page_number = parameters.get("pdf_page_number_to_load") if directory == "": directory = self.default_directory if not file_name or file_name == "": function_response = "File name not provided." else: file_extension = file_name.split(".")[-1] - if file_extension not in self.allowed_file_extensions: + if file_extension.lower() not in self.allowed_file_extensions: function_response = f"Unsupported file extension: {file_extension}" else: file_path = os.path.join(directory, file_name) try: - with open(file_path, "r", encoding="utf-8") as file: - file_content = file.read() - if len(file_content) > self.max_text_size: - function_response = ( - "File content exceeds the maximum allowed size." 
-                            )
-                        else:
-                            function_response = f"File content loaded from {file_path}:\n{file_content}"
+                        # If the file is a PDF, use pdfminer.six's extract_text to read it
+                        # (optionally a single page - zero-indexed, so subtract 1); otherwise open and parse the file
+                        file_content = ""
+                        if file_extension.lower() == "pdf":
+                            file_content = (
+                                extract_text(file_path, page_numbers=[pdf_page_number - 1])
+                                if pdf_page_number
+                                else extract_text(file_path)
+                            )
+                        else:
+                            with open(file_path, "r", encoding="utf-8") as file:
+                                file_content = file.read()
+                        if len(file_content) > self.max_text_size:
+                            function_response = (
+                                "File content exceeds the maximum allowed size."
+                            )
+                        else:
+                            function_response = f"File content loaded from {file_path}:\n{file_content}"
                    except FileNotFoundError:
                        function_response = (
                            f"File '{file_name}' not found in '{directory}'."
@@ -282,12 +296,12 @@ async def execute_tool(
                function_response = "File name or text content not provided."
            else:
                file_extension = file_name.split(".")[-1]
-                if file_extension not in self.allowed_file_extensions:
+                if file_extension.lower() not in self.allowed_file_extensions:
                    file_name += f".{self.default_file_extension}"
                if len(text_content) > self.max_text_size:
                    function_response = "Text content exceeds the maximum allowed size."
                else:
-                    if file_extension == "json":
+                    if file_extension.lower() == "json":
                        try:
                            json_content = json.loads(text_content)
                            text_content = json.dumps(json_content, indent=4)
diff --git a/skills/file_manager/requirements.txt b/skills/file_manager/requirements.txt
new file mode 100644
index 00000000..7f9421c5
Binary files /dev/null and b/skills/file_manager/requirements.txt differ
diff --git a/skills/image_generation/default_config.yaml b/skills/image_generation/default_config.yaml
new file mode 100644
index 00000000..36aa62ac
--- /dev/null
+++ b/skills/image_generation/default_config.yaml
@@ -0,0 +1,18 @@
+name: ImageGeneration
+module: skills.image_generation.main
+category: general
+description:
+  en: Use Wingman AI to generate images based on your input. It uses DALL-E 3.
+  de: Verwende Wingman AI, um Bilder basierend auf deinen Eingaben zu generieren. Es verwendet DALL-E 3.
+# hint:
+#   en:
+#   de:
+examples:
+  - question:
+      en: Generate an image of a cat.
+      de: Generiere ein Bild einer Katze.
+    answer:
+      en: Here is an image of a cat.
+      de: Hier ist ein Bild einer Katze.
+prompt: |
+  You can also generate images.
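The ImageGeneration skill defined above hands the actual generation off to the wingman's generate_image (see main.py below). With OpenAI as the provider, that presumably maps to a DALL-E 3 call along these lines; the client setup, size, and n parameters are assumptions for illustration, not taken from this changeset:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(
        model="dall-e-3",
        prompt="an image of a cat",
        size="1024x1024",
        n=1,
    )
    image_url = result.data[0].url  # a URL like the one the skill forwards as additional_data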
diff --git a/skills/image_generation/logo.png b/skills/image_generation/logo.png new file mode 100644 index 00000000..15ef76b5 Binary files /dev/null and b/skills/image_generation/logo.png differ diff --git a/skills/image_generation/main.py b/skills/image_generation/main.py new file mode 100644 index 00000000..13aeb355 --- /dev/null +++ b/skills/image_generation/main.py @@ -0,0 +1,71 @@ +from typing import TYPE_CHECKING +from api.enums import LogSource, LogType +from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class ImageGeneration(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + return errors + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + instant_response = "" + function_response = "I can't generate an image, sorry. Try another provider." + + if tool_name == "generate_image": + prompt = parameters["prompt"] + if self.settings.debug_mode: + await self.printr.print_async(f"Generate image with prompt: {prompt}.", color=LogType.INFO) + image = await self.wingman.generate_image(prompt) + await self.printr.print_async( + "", + color=LogType.INFO, + source=LogSource.WINGMAN, + source_name=self.wingman.name, + skill_name=self.name, + additional_data={"image_url": image}, + ) + function_response = "Here is an image based on your prompt." + return function_response, instant_response + + async def is_waiting_response_needed(self, tool_name: str) -> bool: + return True + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "generate_image", + { + "type": "function", + "function": { + "name": "generate_image", + "description": "Generate an image based on the users prompt.", + "parameters": { + "type": "object", + "properties": { + "prompt": {"type": "string"}, + }, + "required": ["prompt"], + }, + }, + }, + ), + ] + + return tools diff --git a/skills/msfs2020_control/default_config.yaml b/skills/msfs2020_control/default_config.yaml new file mode 100644 index 00000000..b153588e --- /dev/null +++ b/skills/msfs2020_control/default_config.yaml @@ -0,0 +1,632 @@ +name: Msfs2020Control +module: skills.msfs2020_control.main +category: flight_simulator +description: + en: Control and retrieve data from MSFS2020 dynamically using SimConnect. + de: Steuern und Abrufen von Daten aus MSFS2020 dynamisch mit SimConnect. +hint: + en: This skill can retrieve data and perform actions in the MSFS2020 simulator based on dynamic inputs. + de: Dieser Skill kann Daten abrufen und Aktionen im MSFS2020-Simulator basierend auf dynamischen Eingaben ausführen. +examples: + - question: + en: What is my current altitude? + de: Wie hoch bin ich gerade? + answer: + en: (retrieves 'PLANE_ALTITUDE' from MSFS2020) + de: (ruft 'PLANE_ALTITUDE' aus MSFS2020 ab) + - question: + en: Start the engine. + de: Starte den Motor. + answer: + en: (executes the engine start sequence in MSFS2020) + de: (führt die Motorstartsequenz in MSFS2020 aus) +prompt: | + You can interact with the MSFS2020 simulator using the Python SimConnect API. + To retrieve data, use 'get_data_from_sim'. To set data or perform actions, use 'set_data_or_perform_action_in_sim'. 
+ Example: + User: Set throttle to full! + Response: (uses the set_data_or_perform_action_in_sim function with THROTTLE_FULL as the action) + User: Let's raise the flaps. + Response: (uses the set_data_or_perform_action_in_sim function with FLAPS_UP as the action) + User: How far are we above ground right now? + Response: (uses the get_data_from_sim function with PLANE_ALT_ABOVE_GROUND as the data point) + Here are some dictionaries with common action names and their explanations for use with the set_data_or_perform_action_in_sim function: + Engine Events: + { + "THROTTLE_FULL": "Set throttles max", + "THROTTLE_INCR": "Increment throttles", + "THROTTLE_INCR_SMALL": "Increment throttles small", + "THROTTLE_DECR": "Decrement throttles", + "THROTTLE_DECR_SMALL": "Decrease throttles small", + "THROTTLE_CUT": "Set throttles to idle", + "INCREASE_THROTTLE": "Increment throttles", + "DECREASE_THROTTLE": "Decrement throttles", + "THROTTLE_SET": "Set throttles exactly (0- 16383),", + "THROTTLE1_FULL": "Set throttle 1 max", + "THROTTLE1_INCR": "Increment throttle 1", + "THROTTLE1_INCR_SMALL": "Increment throttle 1 small", + "THROTTLE1_DECR": "Decrement throttle 1", + "THROTTLE1_CUT": "Set throttle 1 to idle", + "THROTTLE2_FULL": "Set throttle 2 max", + "THROTTLE2_INCR": "Increment throttle 2", + "THROTTLE2_INCR_SMALL": "Increment throttle 2 small", + "THROTTLE2_DECR": "Decrement throttle 2", + "THROTTLE2_CUT": "Set throttle 2 to idle", + "THROTTLE3_FULL": "Set throttle 3 max", + "THROTTLE3_INCR": "Increment throttle 3", + "THROTTLE3_INCR_SMALL": "Increment throttle 3 small", + "THROTTLE3_DECR": "Decrement throttle 3", + "THROTTLE3_CUT": "Set throttle 3 to idle", + "THROTTLE4_FULL": "Set throttle 1 max", + "THROTTLE4_INCR": "Increment throttle 4", + "THROTTLE4_INCR_SMALL": "Increment throttle 4 small", + "THROTTLE4_DECR": "Decrement throttle 4", + "THROTTLE4_CUT": "Set throttle 4 to idle", + "THROTTLE_10": "Set throttles to 10%", + "THROTTLE_20": "Set throttles to 20%", + "THROTTLE_30": "Set throttles to 30%", + "THROTTLE_40": "Set throttles to 40%", + "THROTTLE_50": "Set throttles to 50%", + "THROTTLE_60": "Set throttles to 60%", + "THROTTLE_70": "Set throttles to 70%", + "THROTTLE_80": "Set throttles to 80%", + "THROTTLE_90": "Set throttles to 90%", + "THROTTLE1_DECR_SMALL": "Decrease throttle 1 small", + "THROTTLE2_DECR_SMALL": "Decrease throttle 2 small", + "THROTTLE3_DECR_SMALL": "Decrease throttle 3 small", + "THROTTLE4_DECR_SMALL": "Decrease throttle 4 small", + "PROP_PITCH_DECR_SMALL": "Decrease prop levers small", + "PROP_PITCH1_DECR_SMALL": "Decrease prop lever 1 small", + "PROP_PITCH2_DECR_SMALL": "Decrease prop lever 2 small", + "PROP_PITCH3_DECR_SMALL": "Decrease prop lever 3 small", + "PROP_PITCH4_DECR_SMALL": "Decrease prop lever 4 small", + "MIXTURE1_RICH": "Set mixture lever 1 to max rich", + "MIXTURE1_INCR": "Increment mixture lever 1", + "MIXTURE1_INCR_SMALL": "Increment mixture lever 1 small", + "MIXTURE1_DECR": "Decrement mixture lever 1", + "MIXTURE1_LEAN": "Set mixture lever 1 to max lean", + "MIXTURE2_RICH": "Set mixture lever 2 to max rich", + "MIXTURE2_INCR": "Increment mixture lever 2", + "MIXTURE2_INCR_SMALL": "Increment mixture lever 2 small", + "MIXTURE2_DECR": "Decrement mixture lever 2", + "MIXTURE2_LEAN": "Set mixture lever 2 to max lean", + "MIXTURE3_RICH": "Set mixture lever 3 to max rich", + "MIXTURE3_INCR": "Increment mixture lever 3", + "MIXTURE3_INCR_SMALL": "Increment mixture lever 3 small", + "MIXTURE3_DECR": "Decrement mixture lever 3", + "MIXTURE3_LEAN": 
"Set mixture lever 3 to max lean", + "MIXTURE4_RICH": "Set mixture lever 4 to max rich", + "MIXTURE4_INCR": "Increment mixture lever 4", + "MIXTURE4_INCR_SMALL": "Increment mixture lever 4 small", + "MIXTURE4_DECR": "Decrement mixture lever 4", + "MIXTURE4_LEAN": "Set mixture lever 4 to max lean", + "MIXTURE_RICH": "Set mixture levers to max rich", + "MIXTURE_INCR": "Increment mixture levers", + "MIXTURE_INCR_SMALL": "Increment mixture levers small", + "MIXTURE_DECR": "Decrement mixture levers", + "MIXTURE_LEAN": "Set mixture levers to max lean", + "MIXTURE1_SET": "Set mixture lever 1 exact value (0 to 16383),", + "MIXTURE2_SET": "Set mixture lever 2 exact value (0 to 16383),", + "MIXTURE3_SET": "Set mixture lever 3 exact value (0 to 16383),", + "MIXTURE4_SET": "Set mixture lever 4 exact value (0 to 16383),", + "MIXTURE_SET_BEST": "Set mixture levers to current best power setting", + "MIXTURE_DECR_SMALL": "Decrement mixture levers small", + "MIXTURE1_DECR_SMALL": "Decrement mixture lever 1 small", + "MIXTURE2_DECR_SMALL": "Decrement mixture lever 4 small", + "MIXTURE3_DECR_SMALL": "Decrement mixture lever 4 small", + "MIXTURE4_DECR_SMALL": "Decrement mixture lever 4 small", + "PROP_PITCH_SET": "Set prop pitch levers (0 to 16383),", + "PROP_PITCH_LO": "Set prop pitch levers max (lo pitch),", + "PROP_PITCH_INCR": "Increment prop pitch levers", + "PROP_PITCH_INCR_SMALL": "Increment prop pitch levers small", + "PROP_PITCH_DECR": "Decrement prop pitch levers", + "PROP_PITCH_HI": "Set prop pitch levers min (hi pitch),", + "PROP_PITCH1_LO": "Set prop pitch lever 1 max (lo pitch),", + "PROP_PITCH1_INCR": "Increment prop pitch lever 1", + "PROP_PITCH1_INCR_SMALL": "Increment prop pitch lever 1 small", + "PROP_PITCH1_DECR": "Decrement prop pitch lever 1", + "PROP_PITCH1_HI": "Set prop pitch lever 1 min (hi pitch),", + "PROP_PITCH2_LO": "Set prop pitch lever 2 max (lo pitch),", + "PROP_PITCH2_INCR": "Increment prop pitch lever 2", + "PROP_PITCH2_INCR_SMALL": "Increment prop pitch lever 2 small", + "PROP_PITCH2_DECR": "Decrement prop pitch lever 2", + "PROP_PITCH2_HI": "Set prop pitch lever 2 min (hi pitch),", + "PROP_PITCH3_LO": "Set prop pitch lever 3 max (lo pitch),", + "PROP_PITCH3_INCR": "Increment prop pitch lever 3", + "PROP_PITCH3_INCR_SMALL": "Increment prop pitch lever 3 small", + "PROP_PITCH3_DECR": "Decrement prop pitch lever 3", + "PROP_PITCH3_HI": "Set prop pitch lever 3 min (hi pitch),", + "PROP_PITCH4_LO": "Set prop pitch lever 4 max (lo pitch),", + "PROP_PITCH4_INCR": "Increment prop pitch lever 4", + "PROP_PITCH4_INCR_SMALL": "Increment prop pitch lever 4 small", + "PROP_PITCH4_DECR": "Decrement prop pitch lever 4", + "PROP_PITCH4_HI": "Set prop pitch lever 4 min (hi pitch),", + "JET_STARTER": "Selects jet engine starter (for +/- sequence),", + "MAGNETO_SET": "Sets magnetos (0,1),", + "TOGGLE_STARTER1": "Toggle starter 1", + "TOGGLE_STARTER2": "Toggle starter 2", + "TOGGLE_STARTER3": "Toggle starter 3", + "TOGGLE_STARTER4": "Toggle starter 4", + "TOGGLE_ALL_STARTERS": "Toggle starters", + "ENGINE_AUTO_START": "Triggers auto-start", + "ENGINE_AUTO_SHUTDOWN": "Triggers auto-shutdown", + "MAGNETO": "Selects magnetos (for +/- sequence),", + "MAGNETO_DECR": "Decrease magneto switches positions", + "MAGNETO_INCR": "Increase magneto switches positions", + "MAGNETO1_OFF": "Set engine 1 magnetos off", + "MAGNETO1_RIGHT": "Toggle engine 1 right magneto", + "MAGNETO1_LEFT": "Toggle engine 1 left magneto", + "MAGNETO1_BOTH": "Set engine 1 magnetos on", + "MAGNETO1_START": "Set engine 1 magnetos 
on and toggle starter", + "MAGNETO2_OFF": "Set engine 2 magnetos off", + "MAGNETO2_RIGHT": "Toggle engine 2 right magneto", + "MAGNETO2_LEFT": "Toggle engine 2 left magneto", + "MAGNETO2_BOTH": "Set engine 2 magnetos on", + "MAGNETO2_START": "Set engine 2 magnetos on and toggle starter", + "MAGNETO3_OFF": "Set engine 3 magnetos off", + "MAGNETO3_RIGHT": "Toggle engine 3 right magneto", + "MAGNETO3_LEFT": "Toggle engine 3 left magneto", + "MAGNETO3_BOTH": "Set engine 3 magnetos on", + "MAGNETO3_START": "Set engine 3 magnetos on and toggle starter", + "MAGNETO4_OFF": "Set engine 4 magnetos off", + "MAGNETO4_RIGHT": "Toggle engine 4 right magneto", + "MAGNETO4_LEFT": "Toggle engine 4 left magneto", + "MAGNETO4_BOTH": "Set engine 4 magnetos on", + "MAGNETO4_START": "Set engine 4 magnetos on and toggle starter", + "MAGNETO_OFF": "Set engine magnetos off", + "MAGNETO_RIGHT": "Set engine right magnetos on", + "MAGNETO_LEFT": "Set engine left magnetos on", + "MAGNETO_BOTH": "Set engine magnetos on", + "MAGNETO_START": "Set engine magnetos on and toggle starters", + "MAGNETO1_DECR": "Decrease engine 1 magneto switch position", + "MAGNETO1_INCR": "Increase engine 1 magneto switch position", + "MAGNETO2_DECR": "Decrease engine 2 magneto switch position", + "MAGNETO2_INCR": "Increase engine 2 magneto switch position", + "MAGNETO3_DECR": "Decrease engine 3 magneto switch position", + "MAGNETO3_INCR": "Increase engine 3 magneto switch position", + "MAGNETO4_DECR": "Decrease engine 4 magneto switch position", + "MAGNETO4_INCR": "Increase engine 4 magneto switch position", + "MAGNETO1_SET": "Set engine 1 magneto switch", + "MAGNETO2_SET": "Set engine 2 magneto switch", + "MAGNETO3_SET": "Set engine 3 magneto switch", + "MAGNETO4_SET": "Set engine 4 magneto switch", + "ANTI_ICE_ON": "Sets anti-ice switches on", + "ANTI_ICE_OFF": "Sets anti-ice switches off", + "ANTI_ICE_TOGGLE_ENG1": "Toggle engine 1 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG2": "Toggle engine 2 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG3": "Toggle engine 3 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG4": "Toggle engine 4 anti-ice switch",, + "TOGGLE_FUEL_VALVE_ALL": "Toggle engine fuel valves", + "TOGGLE_FUEL_VALVE_ENG1": "Toggle engine 1 fuel valve", + "TOGGLE_FUEL_VALVE_ENG2": "Toggle engine 2 fuel valve", + "TOGGLE_FUEL_VALVE_ENG3": "Toggle engine 3 fuel valve", + "TOGGLE_FUEL_VALVE_ENG4": "Toggle engine 4 fuel valve", + "INC_COWL_FLAPS": "Increment cowl flap levers", + "DEC_COWL_FLAPS": "Decrement cowl flap levers", + "INC_COWL_FLAPS1": "Increment engine 1 cowl flap lever", + "DEC_COWL_FLAPS1": "Decrement engine 1 cowl flap lever", + "INC_COWL_FLAPS2": "Increment engine 2 cowl flap lever", + "DEC_COWL_FLAPS2": "Decrement engine 2 cowl flap lever", + "INC_COWL_FLAPS3": "Increment engine 3 cowl flap lever", + "DEC_COWL_FLAPS3": "Decrement engine 3 cowl flap lever", + "INC_COWL_FLAPS4": "Increment engine 4 cowl flap lever", + "DEC_COWL_FLAPS4": "Decrement engine 4 cowl flap lever", + "FUEL_PUMP": "Toggle electric fuel pumps", + "TOGGLE_ELECT_FUEL_PUMP": "Toggle electric fuel pumps", + "TOGGLE_ELECT_FUEL_PUMP1": "Toggle engine 1 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP2": "Toggle engine 2 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP3": "Toggle engine 3 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP4": "Toggle engine 4 electric fuel pump", + "ENGINE_PRIMER": "Trigger engine primers", + "TOGGLE_PRIMER": "Trigger engine primers", + "TOGGLE_PRIMER1": "Trigger engine 1 primer", + "TOGGLE_PRIMER2": "Trigger engine 2 primer", + "TOGGLE_PRIMER3": 
"Trigger engine 3 primer", + "TOGGLE_PRIMER4": "Trigger engine 4 primer", + "TOGGLE_FEATHER_SWITCHES": "Trigger propeller switches", + "TOGGLE_FEATHER_SWITCH_1": "Trigger propeller 1 switch", + "TOGGLE_FEATHER_SWITCH_2": "Trigger propeller 2 switch", + "TOGGLE_FEATHER_SWITCH_3": "Trigger propeller 3 switch", + "TOGGLE_FEATHER_SWITCH_4": "Trigger propeller 4 switch", + "TOGGLE_PROPELLER_SYNC": "Turns propeller synchronization switch on", + "TOGGLE_AUTOFEATHER_ARM": "Turns auto-feather arming switch on.", + "TOGGLE_AFTERBURNER": "Toggles afterburners", + "ENGINE": "Sets engines for 1,2,3,4 selection (to be followed by SELECT_n)," + } + Flight Controls Events: + { + "FLAPS_UP": "Sets flap handle to full retract position", + "FLAPS_1": "Sets flap handle to first extension position", + "FLAPS_2": "Sets flap handle to second extension position", + "FLAPS_3": "Sets flap handle to third extension position", + "FLAPS_DOWN": "Sets flap handle to full extension position", + "ELEV_TRIM_DN": "Increments elevator trim down", + "ELEV_DOWN": "Increments elevator down", + "AILERONS_LEFT": "Increments ailerons left", + "CENTER_AILER_RUDDER": "Centers aileron and rudder positions", + "AILERONS_RIGHT": "Increments ailerons right", + "ELEV_TRIM_UP": "Increment elevator trim up", + "ELEV_UP": "Increments elevator up", + "RUDDER_LEFT": "Increments rudder left", + "RUDDER_CENTER": "Centers rudder position", + "RUDDER_RIGHT": "Increments rudder right", + "ELEVATOR_SET": "Sets elevator position (-16383 - +16383),", + "AILERON_SET": "Sets aileron position (-16383 - +16383),", + "RUDDER_SET": "Sets rudder position (-16383 - +16383),", + "FLAPS_INCR": "Increments flap handle position", + "FLAPS_DECR": "Decrements flap handle position", + "SPOILERS_ON": "Sets spoiler handle to full extend position", + "SPOILERS_OFF": "Sets spoiler handle to full retract position", + "SPOILERS_ARM_ON": "Sets auto-spoiler arming on", + "SPOILERS_ARM_OFF": "Sets auto-spoiler arming off", + "AILERON_TRIM_LEFT": "Increments aileron trim left", + "AILERON_TRIM_RIGHT": "Increments aileron trim right", + "RUDDER_TRIM_LEFT": "Increments rudder trim left", + "RUDDER_TRIM_RIGHT": "Increments aileron trim right", + "ELEVATOR_TRIM_SET": "Sets elevator trim position (0 to 16383),", + } + Autopilot and Avionics Events: + { + "AP_MASTER": "Toggles AP on/off", + "AUTOPILOT_OFF": "Turns AP off", + "AUTOPILOT_ON": "Turns AP on", + "YAW_DAMPER_TOGGLE": "Toggles yaw damper on/off", + "AP_PANEL_HEADING_HOLD": "Toggles heading hold mode on/off", + "AP_PANEL_ALTITUDE_HOLD": "Toggles altitude hold mode on/off", + "AP_ATT_HOLD_ON": "Turns on AP wing leveler and pitch hold mode", + "AP_LOC_HOLD_ON": "Turns AP localizer hold on/armed and glide-slope hold mode off", + "AP_APR_HOLD_ON": "Turns both AP localizer and glide-slope modes on/armed", + "AP_HDG_HOLD_ON": "Turns heading hold mode on", + "AP_ALT_HOLD_ON": "Turns altitude hold mode on", + "AP_WING_LEVELER_ON": "Turns wing leveler mode on", + "AP_BC_HOLD_ON": "Turns localizer back course hold mode on/armed", + "AP_NAV1_HOLD_ON": "Turns lateral hold mode on", + "AP_ATT_HOLD_OFF": "Turns off attitude hold mode", + "AP_LOC_HOLD_OFF": "Turns off localizer hold mode", + "AP_APR_HOLD_OFF": "Turns off approach hold mode", + "AP_HDG_HOLD_OFF": "Turns off heading hold mode", + "AP_ALT_HOLD_OFF": "Turns off altitude hold mode", + "AP_WING_LEVELER_OFF": "Turns off wing leveler mode", + "AP_BC_HOLD_OFF": "Turns off backcourse mode for localizer hold", + "AP_NAV1_HOLD_OFF": "Turns off nav hold mode", + "AP_AIRSPEED_HOLD": 
"Toggles airspeed hold mode", + "AUTO_THROTTLE_ARM": "Toggles autothrottle arming mode", + "AUTO_THROTTLE_TO_GA": "Toggles Takeoff/Go Around mode", + "HEADING_BUG_INC": "Increments heading hold reference bug", + "HEADING_BUG_DEC": "Decrements heading hold reference bug", + "HEADING_BUG_SET": "Set heading hold reference bug (degrees),", + "AP_PANEL_SPEED_HOLD": "Toggles airspeed hold mode", + "AP_ALT_VAR_INC": "Increments reference altitude", + "AP_ALT_VAR_DEC": "Decrements reference altitude", + "AP_VS_VAR_INC": "Increments vertical speed reference", + "AP_VS_VAR_DEC": "Decrements vertical speed reference", + "AP_SPD_VAR_INC": "Increments airspeed hold reference", + "AP_SPD_VAR_DEC": "Decrements airspeed hold reference", + "AP_PANEL_MACH_HOLD": "Toggles mach hold", + "AP_MACH_VAR_INC": "Increments reference mach", + "AP_MACH_VAR_DEC": "Decrements reference mach", + "AP_MACH_HOLD": "Toggles mach hold", + "AP_ALT_VAR_SET_METRIC": "Sets reference altitude in meters", + "AP_VS_VAR_SET_ENGLISH": "Sets reference vertical speed in feet per minute", + "AP_SPD_VAR_SET": "Sets airspeed reference in knots", + "AP_MACH_VAR_SET": "Sets mach reference", + "YAW_DAMPER_ON": "Turns yaw damper on", + "YAW_DAMPER_OFF": "Turns yaw damper off", + "YAW_DAMPER_SET": "Sets yaw damper on/off (1,0),", + "AP_AIRSPEED_ON": "Turns airspeed hold on", + "AP_AIRSPEED_OFF": "Turns airspeed hold off", + "AP_MACH_ON": "Turns mach hold on", + "AP_MACH_OFF": "Turns mach hold off", + "AP_MACH_SET": "Sets mach hold on/off (1,0),", + "AP_PANEL_ALTITUDE_ON": "Turns altitude hold mode on (without capturing current altitude),", + "AP_PANEL_ALTITUDE_OFF": "Turns altitude hold mode off", + "AP_PANEL_HEADING_ON": "Turns heading mode on (without capturing current heading),", + "AP_PANEL_HEADING_OFF": "Turns heading mode off", + "AP_PANEL_MACH_ON": "Turns on mach hold", + "AP_PANEL_MACH_OFF": "Turns off mach hold", + "AP_PANEL_SPEED_ON": "Turns on speed hold mode", + "AP_PANEL_SPEED_OFF": "Turns off speed hold mode", + "AP_ALT_VAR_SET_ENGLISH": "Sets altitude reference in feet", + "AP_VS_VAR_SET_METRIC": "Sets vertical speed reference in meters per minute", + "TOGGLE_FLIGHT_DIRECTOR": "Toggles flight director on/off", + "SYNC_FLIGHT_DIRECTOR_PITCH": "Synchronizes flight director pitch with current aircraft pitch", + "INCREASE_AUTOBRAKE_CONTROL": "Increments autobrake level", + "DECREASE_AUTOBRAKE_CONTROL": "Decrements autobrake level", + "AP_PANEL_SPEED_HOLD_TOGGLE": "Turns airspeed hold mode on with current airspeed", + "AP_PANEL_MACH_HOLD_TOGGLE": "Sets mach hold reference to current mach", + "AP_NAV_SELECT_SET": "Sets the nav (1 or 2), which is used by the Nav hold modes", + "HEADING_BUG_SELECT": "Selects the heading bug for use with +/-", + "ALTITUDE_BUG_SELECT": "Selects the altitude reference for use with +/-", + "VSI_BUG_SELECT": "Selects the vertical speed reference for use with +/-", + "AIRSPEED_BUG_SELECT": "Selects the airspeed reference for use with +/-", + "AP_PITCH_REF_INC_UP": "Increments the pitch reference for pitch hold mode", + "AP_PITCH_REF_INC_DN": "Decrements the pitch reference for pitch hold mode", + "AP_PITCH_REF_SELECT": "Selects pitch reference for use with +/-", + "AP_ATT_HOLD": "Toggle attitude hold mode", + "AP_LOC_HOLD": "Toggles localizer (only), hold mode", + "AP_APR_HOLD": "Toggles approach hold (localizer and glide-slope),", + "AP_HDG_HOLD": "Toggles heading hold mode", + "AP_ALT_HOLD": "Toggles altitude hold mode", + "AP_WING_LEVELER": "Toggles wing leveler mode", + "AP_BC_HOLD": "Toggles the 
backcourse mode for the localizer hold", + "AP_NAV1_HOLD": "Toggles the nav hold mode", + "AP_MAX_BANK_INC": "Autopilot max bank angle increment.", + "AP_MAX_BANK_DEC": "Autopilot max bank angle decrement.", + "AP_N1_HOLD": "Autopilot, hold the N1 percentage at its current level.", + "AP_N1_REF_INC": "Increment the autopilot N1 reference.", + "AP_N1_REF_DEC": "Decrement the autopilot N1 reference.", + "AP_N1_REF_SET": "Sets the autopilot N1 reference.", + "FLY_BY_WIRE_ELAC_TOGGLE": "Turn on or off the fly by wire Elevators and Ailerons computer.", + "FLY_BY_WIRE_FAC_TOGGLE": "Turn on or off the fly by wire Flight Augmentation computer.", + "FLY_BY_WIRE_SEC_TOGGLE": "Turn on or off the fly by wire Spoilers and Elevators computer.", + "AP_VS_HOLD": "Toggle VS hold mode", + "FLIGHT_LEVEL_CHANGE": "Toggle FLC mode", + "COM_RADIO_SET": "Sets COM frequency (Must convert integer to BCD16 Hz),", + "NAV1_RADIO_SET": "Sets NAV 1 frequency (Must convert integer to BCD16 Hz),", + "NAV2_RADIO_SET": "Sets NAV 2 frequency (Must convert integer to BCD16 Hz),", + "ADF_SET": "Sets ADF frequency (Must convert integer to BCD16 Hz),", + "XPNDR_SET": "Sets transponder code (Must convert integer to BCD16),", + "VOR1_SET": "Sets OBS 1 (0 to 360),", + "VOR2_SET": "Sets OBS 2 (0 to 360),", + "DME1_TOGGLE": "Sets DME display to Nav 1", + "DME2_TOGGLE": "Sets DME display to Nav 2", + "RADIO_VOR1_IDENT_DISABLE": "Turns NAV 1 ID off", + "RADIO_VOR2_IDENT_DISABLE": "Turns NAV 2 ID off", + "RADIO_DME1_IDENT_DISABLE": "Turns DME 1 ID off", + "RADIO_DME2_IDENT_DISABLE": "Turns DME 2 ID off", + "RADIO_ADF_IDENT_DISABLE": "Turns ADF 1 ID off", + "RADIO_VOR1_IDENT_ENABLE": "Turns NAV 1 ID on", + "RADIO_VOR2_IDENT_ENABLE": "Turns NAV 2 ID on", + "RADIO_DME1_IDENT_ENABLE": "Turns DME 1 ID on", + "RADIO_DME2_IDENT_ENABLE": "Turns DME 2 ID on", + "RADIO_ADF_IDENT_ENABLE": "Turns ADF 1 ID on", + "RADIO_VOR1_IDENT_TOGGLE": "Toggles NAV 1 ID", + "RADIO_VOR2_IDENT_TOGGLE": "Toggles NAV 2 ID", + "RADIO_DME1_IDENT_TOGGLE": "Toggles DME 1 ID", + "RADIO_DME2_IDENT_TOGGLE": "Toggles DME 2 ID", + "RADIO_ADF_IDENT_TOGGLE": "Toggles ADF 1 ID", + "RADIO_VOR1_IDENT_SET": "Sets NAV 1 ID (on/off),", + "RADIO_VOR2_IDENT_SET": "Sets NAV 2 ID (on/off),", + "RADIO_DME1_IDENT_SET": "Sets DME 1 ID (on/off),", + "RADIO_DME2_IDENT_SET": "Sets DME 2 ID (on/off),", + "RADIO_ADF_IDENT_SET": "Sets ADF 1 ID (on/off),", + "ADF_CARD_INC": "Increments ADF card", + "ADF_CARD_DEC": "Decrements ADF card", + "ADF_CARD_SET": "Sets ADF card (0-360),", + "TOGGLE_DME": "Toggles between NAV 1 and NAV 2", + "TOGGLE_AVIONICS_MASTER": "Toggles the avionics master switch", + "COM_STBY_RADIO_SET": "Sets COM 1 standby frequency (Must convert integer to BCD16 Hz),", + "COM_STBY_RADIO_SWAP": "Swaps COM 1 frequency with standby", + "COM2_RADIO_SET": "Sets COM 2 frequency (BCD Hz),", + "COM2_STBY_RADIO_SET": "Sets COM 2 standby frequency (Must convert integer to BCD16 Hz),", + "COM2_RADIO_SWAP": "Swaps COM 2 frequency with standby", + "NAV1_STBY_SET": "Sets NAV 1 standby frequency (Must convert integer to BCD16 Hz),", + "NAV1_RADIO_SWAP": "Swaps NAV 1 frequency with standby", + "NAV2_STBY_SET": "Sets NAV 2 standby frequency (Must convert integer to BCD16 Hz),", + "NAV2_RADIO_SWAP": "Swaps NAV 2 frequency with standby", + "COM1_TRANSMIT_SELECT": "Selects COM 1 to transmit", + "COM2_TRANSMIT_SELECT": "Selects COM 2 to transmit", + "COM_RECEIVE_ALL_TOGGLE": "Toggles all COM radios to receive on", + "MARKER_SOUND_TOGGLE": "Toggles marker beacon sound on/off", + "ADF_COMPLETE_SET": 
"Sets ADF 1 frequency (Must convert integer to BCD16 Hz),", + "RADIO_ADF2_IDENT_DISABLE": "Turns ADF 2 ID off", + "RADIO_ADF2_IDENT_ENABLE": "Turns ADF 2 ID on", + "RADIO_ADF2_IDENT_TOGGLE": "Toggles ADF 2 ID", + "RADIO_ADF2_IDENT_SET": "Sets ADF 2 ID on/off (1,0),", + "FREQUENCY_SWAP": "Swaps frequency with standby on whichever NAV or COM radio is selected.", + "TOGGLE_GPS_DRIVES_NAV1": "Toggles between GPS and NAV 1 driving NAV 1 OBS display (and AP),", + "GPS_ACTIVATE_BUTTON": "Activates GPS Autopilot mode", + "GPS_POWER_BUTTON": "Toggles power button", + "GPS_NEAREST_BUTTON": "Selects Nearest Airport Page", + "GPS_OBS_BUTTON": "Toggles automatic sequencing of waypoints", + "GPS_MSG_BUTTON": "Toggles the Message Page", + "GPS_MSG_BUTTON_DOWN": "Triggers the pressing of the message button.", + "GPS_MSG_BUTTON_UP": "Triggers the release of the message button", + "GPS_FLIGHTPLAN_BUTTON": "Displays the programmed flightplan.", + "GPS_TERRAIN_BUTTON": "Displays terrain information on default display", + "GPS_PROCEDURE_BUTTON": "Displays the approach procedure page.", + "GPS_ZOOMIN_BUTTON": "Zooms in default display", + "GPS_ZOOMOUT_BUTTON": "Zooms out default display", + "GPS_DIRECTTO_BUTTON": "Brings up the \"Direct To\" page", + "GPS_MENU_BUTTON": "Brings up page to select active legs in a flightplan.", + "GPS_CLEAR_BUTTON": "Clears entered data on a page", + "GPS_CLEAR_ALL_BUTTON": "Clears all data immediately", + "GPS_CLEAR_BUTTON_DOWN": "Triggers the pressing of the Clear button", + "GPS_CLEAR_BUTTON_UP": "Triggers the release of the Clear button.", + "GPS_ENTER_BUTTON": "Approves entered data.", + "GPS_CURSOR_BUTTON": "Selects GPS cursor", + "GPS_GROUP_KNOB_INC": "Increments cursor", + "GPS_GROUP_KNOB_DEC": "Decrements cursor", + "GPS_PAGE_KNOB_INC": "Increments through pages", + "GPS_PAGE_KNOB_DEC": "Decrements through pages", + "DME_SELECT": "Selects one of the two DME systems (1,2),.", + "KOHLSMAN_SET": "Sets altimeter setting (Millibars * 16),", + "BAROMETRIC": "Syncs altimeter setting to sea level pressure, or 29.92 if above 18000 feet", + } + ATC Events: + { + "ATC": "Activates ATC window", + "ATC_MENU_1": "Selects ATC option 1, use other numbers for other options 2-10", + } + Other Miscellaneous Events: + { + "PARKING_BRAKES": "Toggles parking brake on/off", + "GEAR_PUMP": "Increments emergency gear extension", + "PITOT_HEAT_ON": "Turns pitot heat switch on", + "PITOT_HEAT_OFF": "Turns pitot heat switch off", + "GEAR_UP": "Sets gear handle in UP position", + "GEAR_DOWN": "Sets gear handle in DOWN position", + "TOGGLE_MASTER_BATTERY": "Toggles main battery switch", + "TOGGLE_MASTER_ALTERNATOR": "Toggles main alternator/generator switch", + "TOGGLE_ELECTRIC_VACUUM_PUMP": "Toggles backup electric vacuum pump", + "TOGGLE_ALTERNATE_STATIC": "Toggles alternate static pressure port", + "DECREASE_DECISION_HEIGHT": "Decrements decision height reference", + "INCREASE_DECISION_HEIGHT": "Increments decision height reference", + "TOGGLE_STRUCTURAL_DEICE": "Toggles structural deice switch", + "TOGGLE_PROPELLER_DEICE": "Toggles propeller deice switch", + "TOGGLE_ALTERNATOR1": "Toggles alternator/generator 1 switch", + "TOGGLE_ALTERNATOR2": "Toggles alternator/generator 2 switch", + "TOGGLE_ALTERNATOR3": "Toggles alternator/generator 3 switch", + "TOGGLE_ALTERNATOR4": "Toggles alternator/generator 4 switch", + "TOGGLE_MASTER_BATTERY_ALTERNATOR": "Toggles master battery and alternator switch", + "TOGGLE_AIRCRAFT_EXIT": "Toggles primary door open/close. 
Follow by KEY_SELECT_2, etc for subsequent doors.",
+  "SET_WING_FOLD": "Sets the wings into the folded position suitable for storage, typically on a carrier. Takes a value:\n 1 - fold wings,\n 0 - unfold wings",
+  "TOGGLE_TAIL_HOOK_HANDLE": "Toggles tail hook",
+  "TOGGLE_WATER_RUDDER": "Toggles water rudders",
+  "TOGGLE_PUSHBACK": "Toggles pushback.",
+  "TOGGLE_MASTER_IGNITION_SWITCH": "Toggles master ignition switch",
+  "TOGGLE_TAILWHEEL_LOCK": "Toggles tail wheel lock",
+  "ADD_FUEL_QUANTITY": "Adds fuel to the aircraft, 25% of capacity by default. 0 to 65535 (max fuel), can be passed.",
+  "RETRACT_FLOAT_SWITCH_DEC": "If the plane has retractable floats, moves the retract position from Extend to Neutral, or Neutral to Retract.",
+  "RETRACT_FLOAT_SWITCH_INC": "If the plane has retractable floats, moves the retract position from Retract to Neutral, or Neutral to Extend.",
+  "TOGGLE_WATER_BALLAST_VALVE": "Turn the water ballast valve on or off.",
+  "TOGGLE_VARIOMETER_SWITCH": "Turn the variometer on or off.",
+  "APU_STARTER": "Start up the auxiliary power unit (APU),.",
+  "APU_OFF_SWITCH": "Turn the APU off.",
+  "APU_GENERATOR_SWITCH_TOGGLE": "Turn the auxiliary generator on or off.",
+  "APU_GENERATOR_SWITCH_SET": "Set the auxiliary generator switch (0,1),.",
+  "HYDRAULIC_SWITCH_TOGGLE": "Turn the hydraulic switch on or off.",
+  "BLEED_AIR_SOURCE_CONTROL_INC": "Increases the bleed air source control.",
+  "BLEED_AIR_SOURCE_CONTROL_DEC": "Decreases the bleed air source control.",
+  "BLEED_AIR_SOURCE_CONTROL_SET": "Set to one of:\n 0: auto\n 1: off\n 2: apu\n 3: engines",
+  "TURBINE_IGNITION_SWITCH_TOGGLE": "Turn the turbine ignition switch on or off.",
+  "CABIN_NO_SMOKING_ALERT_SWITCH_TOGGLE": "Turn the \"No smoking\" alert on or off.",
+  "CABIN_SEATBELTS_ALERT_SWITCH_TOGGLE": "Turn the \"Fasten seatbelts\" alert on or off.",
+  "ANTISKID_BRAKES_TOGGLE": "Turn the anti-skid braking system on or off.",
+  "GPWS_SWITCH_TOGGLE": "Turn the ground proximity warning system (GPWS), on or off.",
+  "MANUAL_FUEL_PRESSURE_PUMP": "Activate the manual fuel pressure pump.",
+  "PAUSE_ON": "Turns pause on",
+  "PAUSE_OFF": "Turns pause off",
+  "SIM_RATE_INCR": "Increase sim rate",
+  "SIM_RATE_DECR": "Decrease sim rate",
+  "INVOKE_HELP": "Brings up Help system",
+  "FLIGHT_MAP": "Brings up flight map",
+  }
+  And here are some common data points to obtain aircraft state information using the get_data_from_sim function:
+  {
+  "GROUND_VELOCITY": ["Speed relative to the earth's surface", b'Knots'],
+  "TOTAL_WORLD_VELOCITY": ["Speed relative to the earth's center", b'Feet per second'],
+  "ACCELERATION_WORLD_X": ["Acceleration relative to earth, in east/west direction", b'Feet per second squared'],
+  "ACCELERATION_WORLD_Y": ["Acceleration relative to earth, in vertical direction", b'Feet per second squared'],
+  "ACCELERATION_WORLD_Z": ["Acceleration relative to earth, in north/south direction", b'Feet per second squared'],
+  "ACCELERATION_BODY_X": ["Acceleration relative to aircraft axis, in east/west direction", b'Feet per second squared'],
+  "ACCELERATION_BODY_Y": ["Acceleration relative to aircraft axis, in vertical direction", b'Feet per second squared'],
+  "ACCELERATION_BODY_Z": ["Acceleration relative to aircraft axis, in north/south direction", b'Feet per second squared'],
+  "ROTATION_VELOCITY_BODY_X": ["Rotation relative to aircraft axis", b'Feet per second'],
+  "ROTATION_VELOCITY_BODY_Y": ["Rotation relative to aircraft axis", b'Feet per second'],
+  "ROTATION_VELOCITY_BODY_Z": ["Rotation relative to aircraft
axis", b'Feet per second'], + "RELATIVE_WIND_VELOCITY_BODY_X": ["Lateral speed relative to wind", b'Feet per second'], + "RELATIVE_WIND_VELOCITY_BODY_Y": ["Vertical speed relative to wind", b'Feet per second'], + "RELATIVE_WIND_VELOCITY_BODY_Z": ["Longitudinal speed relative to wind", b'Feet per second'], + "PLANE_ALT_ABOVE_GROUND": ["Altitude above the surface", b'Feet'], + "PLANE_LATITUDE": ["Latitude of aircraft, North is positive, South negative", b'Degrees'], + "PLANE_LONGITUDE": ["Longitude of aircraft, East is positive, West negative", b'Degrees'], + "PLANE_ALTITUDE": ["Altitude of aircraft", b'Feet'], + "PLANE_PITCH_DEGREES": ["Pitch angle, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_BANK_DEGREES": ["Bank angle, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_HEADING_DEGREES_TRUE": ["Heading relative to true north, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_HEADING_DEGREES_MAGNETIC": ["Heading relative to magnetic north, although the name mentions degrees the units used are radians", b'Radians'], + "MAGVAR": ["Magnetic variation", b'Degrees'], + "GROUND_ALTITUDE": ["Altitude of surface", b'Meters'], + "SIM_ON_GROUND": ["On ground flag", b'Bool'], + "INCIDENCE_ALPHA": ["Angle of attack", b'Radians'], + "INCIDENCE_BETA": ["Sideslip angle", b'Radians'], + "AIRSPEED_TRUE": ["True airspeed", b'Knots'], + "AIRSPEED_INDICATED": ["Indicated airspeed", b'Knots'], + "AIRSPEED_TRUE_CALIBRATE": ["Angle of True calibration scale on airspeed indicator", b'Degrees'], + "AIRSPEED_BARBER_POLE": ["Redline airspeed (dynamic on some aircraft)", b'Knots'], + "AIRSPEED_MACH": ["Current mach", b'Mach'], + "VERTICAL_SPEED": ["Vertical speed indication", b'feet/minute'], + "MACH_MAX_OPERATE": ["Maximum design mach", b'Mach'], + "STALL_WARNING": ["Stall warning state", b'Bool'], + "OVERSPEED_WARNING": ["Overspeed warning state", b'Bool'], + "INDICATED_ALTITUDE": ["Altimeter indication", b'Feet'], + "ATTITUDE_INDICATOR_PITCH_DEGREES": ["AI pitch indication", b'Radians'], + "ATTITUDE_INDICATOR_BANK_DEGREES": ["AI bank indication", b'Radians'], + "ATTITUDE_BARS_POSITION": ["AI reference pitch reference bars", b'Percent Over 100'], + "ATTITUDE_CAGE": ["AI caged state", b'Bool'], + "WISKEY_COMPASS_INDICATION_DEGREES": ["Magnetic compass indication", b'Degrees'], + "PLANE_HEADING_DEGREES_GYRO": ["Heading indicator (directional gyro) indication", b'Radians'], + "HEADING_INDICATOR": ["Heading indicator (directional gyro) indication", b'Radians'], + "GYRO_DRIFT_ERROR": ["Angular error of heading indicator", b'Radians',], + "DELTA_HEADING_RATE": ["Rate of turn of heading indicator", b'Radians per second'], + "TURN_COORDINATOR_BALL": ["Turn coordinator ball position", b'Position'], + "ANGLE_OF_ATTACK_INDICATOR": ["AoA indication", b'Radians'], + "RADIO_HEIGHT": ["Radar altitude", b'Feet'], + "ABSOLUTE_TIME": ["Time, as referenced from 12:00 AM January 1, 0000", b'Seconds'], + "ZULU_TIME": ["Greenwich Mean Time (GMT)", b'Seconds'], + "ZULU_DAY_OF_WEEK": ["GMT day of week", b'Number'], + "ZULU_DAY_OF_MONTH": ["GMT day of month", b'Number'], + "ZULU_MONTH_OF_YEAR": ["GMT month of year", b'Number',], + "ZULU_DAY_OF_YEAR": ["GMT day of year", b'Number'], + "ZULU_YEAR": ["GMT year", b'Number'], + "LOCAL_TIME": ["Local time", b'Seconds'], + "LOCAL_DAY_OF_WEEK": ["Local day of week", b'Number'], + "LOCAL_DAY_OF_MONTH": ["Local day of month", b'Number',], + "LOCAL_MONTH_OF_YEAR": ["Local month of year", 
b'Number'],
+  "LOCAL_DAY_OF_YEAR": ["Local day of year", b'Number'],
+  "LOCAL_YEAR": ["Local year", b'Number'],
+  "TIME_ZONE_OFFSET": ["Local time difference from GMT", b'Seconds'],
+  "ATC_TYPE": ["Type used by ATC", b'String'],
+  "ATC_MODEL": ["Model used by ATC", b'String'],
+  "ATC_ID": ["ID used by ATC", b'String'],
+  "ATC_AIRLINE": ["Airline used by ATC", b'String'],
+  "ATC_FLIGHT_NUMBER": ["Flight Number used by ATC", b'String'],
+  "TITLE": ["Title from aircraft.cfg", b'String'],
+  "HSI_STATION_IDENT": ["Tuned station identifier", b'String'],
+  "GPS_WP_PREV_ID": ["ID of previous GPS waypoint", b'String'],
+  "GPS_WP_NEXT_ID": ["ID of next GPS waypoint", b'String'],
+  "GPS_APPROACH_AIRPORT_ID": ["ID of airport", b'String'],
+  "GPS_APPROACH_APPROACH_ID": ["ID of approach", b'String'],
+  "GPS_APPROACH_TRANSITION_ID": ["ID of approach transition", b'String'],
+  "GPS_ETA": ["Estimated time of arrival at destination in seconds"],
+  "GPS_ETE": ["Estimated time en route to destination in seconds"],
+  "GPS_TARGET_DISTANCE": ["Estimated distance to destination in meters"],
+  "NAV_LOC_AIRPORT_IDENT": ["Airport ICAO code for airport tuned in Nav radio"],
+  "AMBIENT_DENSITY": ["Ambient density", b'Slugs per cubic feet'],
+  "AMBIENT_TEMPERATURE": ["Ambient temperature", b'Celsius'],
+  "AMBIENT_PRESSURE": ["Ambient pressure", b'inHg'],
+  "AMBIENT_WIND_VELOCITY": ["Wind velocity", b'Knots'],
+  "AMBIENT_WIND_DIRECTION": ["Wind direction", b'Degrees'],
+  "AMBIENT_PRECIP_STATE": ["State of current precipitation", b'String'],
+  "BAROMETER_PRESSURE": ["Barometric pressure", b'Millibars'],
+  "SEA_LEVEL_PRESSURE": ["Barometric pressure at sea level", b'Millibars'],
+  "TOTAL_AIR_TEMPERATURE": ["Total air temperature is the air temperature at the front of the aircraft where the ram pressure from the speed of the aircraft is taken into account.", b'Celsius'],
+  "AMBIENT_IN_CLOUD": ["True if the aircraft is in a cloud.", b'Bool'],
+  "AMBIENT_VISIBILITY": ["Ambient visibility", b'Meters'],
+  "GENERAL_ENG_RPM:index": ["Engine rpm", b'Rpm'],
+  "GENERAL_ENG_PCT_MAX_RPM:index": ["Percent of max rated rpm", b'Percent'],
+  "GENERAL_ENG_EXHAUST_GAS_TEMPERATURE:index": ["Engine exhaust gas temperature.", b'Rankine'],
+  "GENERAL_ENG_OIL_PRESSURE:index": ["Engine oil pressure", b'Psf'],
+  "GENERAL_ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'Rankine'],
+  "GENERAL_ENG_FUEL_PRESSURE:index": ["Engine fuel pressure", b'Psi'],
+  "ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'Rankine'],
+  "FUEL_TOTAL_QUANTITY": ["Current fuel in volume", b'Gallons'],
+  "FUEL_TOTAL_CAPACITY": ["Total fuel capacity of the aircraft", b'Gallons'],
+  }
+custom_properties:
+  - hint: Whether to try to autostart data monitoring mode (tour guide mode), which automatically monitors for latitude, longitude and altitude. This can also be toggled with a voice command to start or end data monitoring mode.
+    id: autostart_data_monitoring_loop_mode
+    name: Autostart tour guide mode
+    property_type: boolean
+    required: true
+    value: false
+  - hint: Minimum interval in seconds before the data monitoring loop will run again.
+    id: min_data_monitoring_seconds
+    name: Minimum monitoring interval
+    property_type: number
+    required: true
+    value: 60
+  - hint: Maximum interval in seconds before the data monitoring loop will run again.
+    id: max_data_monitoring_seconds
+    name: Maximum monitoring interval
+    property_type: number
+    required: true
+    value: 360
+  - hint: The backstory to use for data monitoring mode.
Leave blank if you just want to use what is already in your wingman's backstory. + id: data_monitoring_backstory + name: Tour guide mode backstory + property_type: textarea + required: false + value: | + You are a friendly copilot in the plane with the user, the pilot. Casually comment on the following information in a way that keeps your personality and role play. If it is data about a place, comment about just having flown over the place (or if the plane is on the ground, about being at the place) and provide some brief commentary on the place in an engaging tone. Keep your remarks succinct, and avoid directly talking about latitudes and longitudes: \ No newline at end of file diff --git a/skills/msfs2020_control/logo.png b/skills/msfs2020_control/logo.png new file mode 100644 index 00000000..53699cde Binary files /dev/null and b/skills/msfs2020_control/logo.png differ diff --git a/skills/msfs2020_control/main.py b/skills/msfs2020_control/main.py new file mode 100644 index 00000000..1f1ece41 --- /dev/null +++ b/skills/msfs2020_control/main.py @@ -0,0 +1,427 @@ +import time +import random +import requests +from typing import TYPE_CHECKING +from SimConnect import * +from api.interface import ( + SettingsConfig, + SkillConfig, + WingmanInitializationError, +) +from api.enums import LogType +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class Msfs2020Control(Skill): + + def __init__(self, config: SkillConfig, settings: SettingsConfig, wingman: "OpenAiWingman") -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + self.already_initialized_simconnect = False + self.loaded = False + self.sm = None # Needs to be set once MSFS2020 is actually connected + self.aq = None # Same + self.ae = None # Same + self.data_monitoring_loop_running = False + self.autostart_data_monitoring_loop_mode = False + self.data_monitoring_backstory = "" + self.min_data_monitoring_seconds = 60 + self.max_data_monitoring_seconds = 360 + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + self.autostart_data_monitoring_loop_mode = self.retrieve_custom_property_value( + "autostart_data_monitoring_loop_mode", errors + ) + self.data_monitoring_backstory = self.retrieve_custom_property_value( + "data_monitoring_backstory", errors + ) + # If not available or not set, use default wingman's backstory + if not self.data_monitoring_backstory or self.data_monitoring_backstory == "" or self.data_monitoring_backstory == " ": + self.data_monitoring_backstory = self.wingman.config.prompts.backstory + + self.min_data_monitoring_seconds = self.retrieve_custom_property_value( + "min_data_monitoring_seconds", errors + ) + + self.max_data_monitoring_seconds = self.retrieve_custom_property_value( + "max_data_monitoring_seconds", errors + ) + + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + return [ + ( + "get_data_from_sim", + { + "type": "function", + "function": { + "name": "get_data_from_sim", + "description": "Retrieve data points from Microsoft Flight Simulator 2020 using the Python SimConnect module.", + "parameters": { + "type": "object", + "properties": { + "data_point": { + "type": "string", + "description": "The data point to retrieve, such as 'PLANE_ALTITUDE', 'PLANE_HEADING_DEGREES_TRUE'.", + }, + }, + "required": ["data_point"], + }, + }, + }, + ), + ( + "set_data_or_perform_action_in_sim", + { + "type": "function", + "function": { + "name": 
"set_data_or_perform_action_in_sim", + "description": "Set data points or perform actions in Microsoft Flight Simulator 2020 using the Python SimConnect module.", + "parameters": { + "type": "object", + "properties": { + "action": { + "type": "string", + "description": "The action to perform or data point to set, such as 'TOGGLE_MASTER_BATTERY', 'THROTTLE_SET'.", + }, + "argument": { + "type": "number", + "description": "The argument to pass for the action, if any. For actions like 'TOGGLE_MASTER_BATTERY', no argument is needed. For 'THROTTLE_SET', pass the throttle value.", + }, + }, + "required": ["action"], + }, + }, + }, + ), + ( + "start_or_activate_data_monitoring_loop", + { + "type": "function", + "function": { + "name": "start_or_activate_data_monitoring_loop", + "description": "Begin data monitoring loop, which will check certain data points at designated intervals. May be referred to as tour guide mode.", + }, + }, + ), + ( + "end_or_stop_data_monitoring_loop", + { + "type": "function", + "function": { + "name": "end_or_stop_data_monitoring_loop", + "description": "End or stop data monitoring loop, to stop automatically checking data points at designated intervals. May be referred to as tour guide mode.", + }, + }, + ), + ( + "get_information_about_current_location", + { + "type": "function", + "function": { + "name": "get_information_about_current_location", + "description": "Used to provide more detailed information if the user asks a general question like 'where are we?', 'what city are we flying over?', or 'what country is down there?'", + }, + }, + ), + ] + + # Using sample methods found here; allow AI to determine the appropriate variables and arguments, if any: + # https://pypi.org/project/SimConnect/ + async def execute_tool(self, tool_name: str, parameters: dict[str, any]) -> tuple[str, str]: + function_response = "Error in execution. Can you please try your command again?" + instant_response = "" + + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing {tool_name} function with parameters: {parameters}", + color=LogType.INFO, + ) + + if tool_name == "get_data_from_sim": + data_point = parameters.get("data_point") + value = self.aq.get(data_point) + function_response = f"{data_point} value is: {value}" + + elif tool_name == "set_data_or_perform_action_in_sim": + action = parameters.get("action") + argument = parameters.get("argument", None) + + try: + if argument is not None: + self.aq.set(action, argument) + else: + event_to_trigger = self.ae.find(action) + event_to_trigger() + except: + if self.settings.debug_mode: + await self.printr.print_async( + f"Tried to perform action {action} with argument {argument} using aq.set, now going to try ae.event_to_trigger.", + color=LogType.INFO, + ) + + try: + if argument is not None: + event_to_trigger = self.ae.find(action) + event_to_trigger(argument) + except: + if self.settings.debug_mode: + await self.print_execution_time() + await self.printr.print_async( + f"Neither aq.set nor ae.event_to_trigger worked with {action} and {argument}. Command failed.", + color=LogType.INFO, + ) + return function_response, instant_response + + function_response = f"Action '{action}' executed with argument '{argument}'" + + elif tool_name == "start_or_activate_data_monitoring_loop": + if self.data_monitoring_loop_running: + function_response = "Data monitoring loop is already running." 
+ return function_response, instant_response + + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing start_or_activate_data_monitoring_loop", + color=LogType.INFO, + ) + + if not self.already_initialized_simconnect: + function_response = "Cannot start data monitoring / tour guide mode because simconnect is not connected yet. Check to make sure the game is running." + return function_response, instant_response + + if not self.data_monitoring_loop_running: + await self.initialize_data_monitoring_loop() + + if self.settings.debug_mode: + await self.print_execution_time() + + function_response = "Started data monitoring loop/tour guide mode." + + elif tool_name == "end_or_stop_data_monitoring_loop": + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing end_or_stop_data_monitoring_loop", + color=LogType.INFO, + ) + + await self.stop_data_monitoring_loop() + + if self.settings.debug_mode: + await self.print_execution_time() + + function_response = "Closed data monitoring / tour guide mode." + + elif tool_name == "get_information_about_current_location": + place_info = await self.convert_lat_long_data_into_place_data() + + if self.settings.debug_mode: + await self.print_execution_time() + + if place_info: + on_ground = self.aq.get("SIM_ON_GROUND") + on_ground_statement = "The plane is currently in the air." + if on_ground == False: + on_ground_statement = "The plane is currently on the ground." + function_response = f"{on_ground_statement} Detailed information regarding the location we are currently at or flying over: {place_info}" + else: + function_response = "Unable to get more detailed information regarding the place based on the current latitude and longitude." + + if self.settings.debug_mode: + await self.print_execution_time() + await self.printr.print_async( + f"{function_response}", + color=LogType.INFO, + ) + + return function_response, instant_response + + + # Search for MSFS2020 sim running and then connect + async def start_simconnect(self): + while self.loaded and not self.already_initialized_simconnect: + try: + if self.settings.debug_mode: + await self.printr.print_async( + f"Attempting to find MSFS2020....", + color=LogType.INFO, + ) + self.sm = SimConnect() + self.aq = AircraftRequests(self.sm, _time=2000) + self.ae = AircraftEvents(self.sm) + self.already_initialized_simconnect = True + if self.settings.debug_mode: + await self.printr.print_async( + f"Initialized SimConnect with MSFS2020.", + color=LogType.INFO, + ) + if self.autostart_data_monitoring_loop_mode: + await self.initialize_data_monitoring_loop() + except: + # Wait 30 seconds between connect attempts + time.sleep(30) + + async def initialize_data_monitoring_loop(self): + if self.data_monitoring_loop_running: + return + + if self.settings.debug_mode: + await self.printr.print_async( + "Starting data monitoring loop", + color=LogType.INFO, + ) + + self.threaded_execution(self.start_data_monitoring_loop) + + async def start_data_monitoring_loop(self): + if not self.data_monitoring_loop_running: + self.data_monitoring_loop_running = True + + while self.data_monitoring_loop_running: + random_time = random.choice(range(self.min_data_monitoring_seconds, self.max_data_monitoring_seconds, 15)) #Gets random number from min to max in increments of 15 + if self.settings.debug_mode: + await self.printr.print_async( + "Attempting looped monitoring check.", + color=LogType.INFO, + ) + try: + place_data = await self.convert_lat_long_data_into_place_data() + if place_data: + 
await self.initiate_llm_call_with_plane_data(place_data) + except Exception as e: + if self.settings.debug_mode: + await self.printr.print_async( + f"Something failed in looped monitoring check. Could not return data or send to llm: {e}.", + color=LogType.INFO, + ) + time.sleep(random_time) + + async def stop_data_monitoring_loop(self): + self.data_monitoring_loop_running = False + + if self.settings.debug_mode: + await self.printr.print_async( + "Stopping data monitoring loop", + color=LogType.INFO, + ) + + async def convert_lat_long_data_into_place_data(self, latitude=None, longitude=None, altitude=None): + if not self.already_initialized_simconnect or not self.sm or not self.aq: + return None + ground_altitude = 0 + # If all parameters are already provided, just run the request + if latitude and longitude and altitude: + ground_altitude = self.aq.get("GROUND_ALTITUDE") + # If only latitude and longitude, grab altitude so a reasonable "zoom level" can be set for place data + elif latitude and longitude: + altitude = self.aq.get("PLANE_ALTITUDE") + ground_altitude = self.aq.get("GROUND_ALTITUDE") + # Otherwise grab all data components + else: + latitude = self.aq.get("PLANE_LATITUDE") + longitude = self.aq.get("PLANE_LONGITUDE") + altitude = self.aq.get("PLANE_ALTITUDE") + ground_altitude = self.aq.get("GROUND_ALTITUDE") + + # If no values still, for instance, when connection is made but no data yet, return None + if not latitude or not longitude or not altitude or not ground_altitude: + return None + + # Set zoom level based on altitude, see zoom documentation at https://nominatim.org/release-docs/develop/api/Reverse/ + zoom = 18 + distance_above_ground = altitude - ground_altitude + if distance_above_ground <= 1500: + zoom = 18 + elif distance_above_ground <= 3500: + zoom = 17 + elif distance_above_ground <= 5000: + zoom = 15 + elif distance_above_ground <= 10000: + zoom = 13 + elif distance_above_ground <= 20000: + zoom = 10 + else: + zoom = 8 + + if self.settings.debug_mode: + await self.printr.print_async( + f"Attempting query of OpenStreetMap Nominatum with parameters: {latitude}, {longitude}, {altitude}, zoom level: {zoom}", + color=LogType.INFO, + ) + + # Request data from openstreetmap nominatum api for reverse geocoding + url = f"https://nominatim.openstreetmap.org/reverse?format=jsonv2&lat={latitude}&lon={longitude}&zoom={zoom}&accept-language=en&extratags=1" + headers = { + 'User-Agent': f'msfs2020control_skill wingmanai {self.wingman.name}' + } + response = requests.get(url, headers=headers) + if response.status_code == 200: + return response.json() + else: + if self.settings.debug_mode: + await self.printr.print_async(f"API request failed to {url}, status code: {response.status_code}.", color=LogType.INFO) + return None + + # Get LLM to provide a verbal response to the user, without requiring the user to initiate a communication with the LLM + async def initiate_llm_call_with_plane_data(self, data): + on_ground = self.aq.get("SIM_ON_GROUND") + on_ground_statement = "The plane is currently in the air." + if on_ground: + on_ground_statement = "The plane is currently on the ground." 
+ user_content = f"{on_ground_statement} Information about the location: {data}" + messages = [ + { + 'role': 'system', + 'content': f""" + {self.data_monitoring_backstory} + """, + }, + { + 'role': 'user', + 'content': user_content, + }, + ] + if self.settings.debug_mode: + await self.printr.print_async( + f"Attempting llm call with parameters: {self.data_monitoring_backstory}, {user_content}.", + color=LogType.INFO, + ) + completion = await self.llm_call(messages) + response = completion.choices[0].message.content if completion and completion.choices else "" + + if not response: + if self.settings.debug_mode: + await self.printr.print_async( + f"Llm call returned no response.", + color=LogType.INFO, + ) + return + + await self.printr.print_async( + text=f"Data monitoring response: {response}", + color=LogType.INFO, + source_name=self.wingman.name + ) + + self.threaded_execution(self.wingman.play_to_user, response, True) + await self.wingman.add_assistant_message(response) + + async def is_waiting_response_needed(self, tool_name: str) -> bool: + return True + + async def prepare(self) -> None: + """Load the skill by trying to connect to the sim""" + self.loaded = True + self.threaded_execution(self.start_simconnect) + + async def unload(self) -> None: + """Unload the skill.""" + await self.stop_data_monitoring_loop() + self.loaded = False + if self.sm: + self.sm.exit() diff --git a/skills/msfs2020_control/requirements.txt b/skills/msfs2020_control/requirements.txt new file mode 100644 index 00000000..b3b490eb --- /dev/null +++ b/skills/msfs2020_control/requirements.txt @@ -0,0 +1 @@ +SimConnect~=0.4.26 \ No newline at end of file diff --git a/skills/skill_base.py b/skills/skill_base.py index ca140865..634cd4f0 100644 --- a/skills/skill_base.py +++ b/skills/skill_base.py @@ -28,18 +28,25 @@ def __init__( self.wingman = wingman self.secret_keeper = SecretKeeper() + self.secret_keeper.secret_events.subscribe("secrets_saved", self.secret_changed) self.name = self.__class__.__name__ self.printr = Printr() self.execution_start: None | float = None """Used for benchmarking executon times. The timer is (re-)started whenever the process function starts.""" + async def secret_changed(self, secrets: dict[str, any]): + """Called when a secret is changed.""" + pass + async def validate(self) -> list[WingmanInitializationError]: """Validates the skill configuration.""" return [] async def unload(self) -> None: """Unload the skill. Use this hook to clear background tasks, etc.""" - pass + self.secret_keeper.secret_events.unsubscribe( + "secrets_saved", self.secret_changed + ) async def prepare(self) -> None: """Prepare the skill. 
         Use this hook to initialize background tasks, etc."""
diff --git a/skills/spotify/main.py b/skills/spotify/main.py
index 583ceacd..17639270 100644
--- a/skills/spotify/main.py
+++ b/skills/spotify/main.py
@@ -24,16 +24,23 @@ def __init__(
         self.data_path = get_writable_dir(path.join("skills", "spotify", "data"))
         self.spotify: spotipy.Spotify = None
         self.available_devices = []
+        self.secret: str = None
+
+    async def secret_changed(self, secrets: dict[str, any]):
+        await super().secret_changed(secrets)
+
+        if secrets.get("spotify_client_secret") != self.secret:
+            await self.validate()

     async def validate(self) -> list[WingmanInitializationError]:
         errors = await super().validate()

-        secret = await self.retrieve_secret("spotify_client_secret", errors)
+        self.secret = await self.retrieve_secret("spotify_client_secret", errors)
         client_id = self.retrieve_custom_property_value("spotify_client_id", errors)
         redirect_url = self.retrieve_custom_property_value(
             "spotify_redirect_url", errors
         )
-        if secret and client_id and redirect_url:
+        if self.secret and client_id and redirect_url:
             # now that we have everything, initialize the Spotify client
             cache_handler = spotipy.cache_handler.CacheFileHandler(
                 cache_path=f"{self.data_path}/.cache"
@@ -41,7 +48,7 @@ async def validate(self) -> list[WingmanInitializationError]:
             self.spotify = spotipy.Spotify(
                 auth_manager=SpotifyOAuth(
                     client_id=client_id,
-                    client_secret=secret,
+                    client_secret=self.secret,
                     redirect_uri=redirect_url,
                     scope=[
                         "user-library-read",
diff --git a/skills/thinking_sound/default_config.yaml b/skills/thinking_sound/default_config.yaml
new file mode 100644
index 00000000..a6a9781a
--- /dev/null
+++ b/skills/thinking_sound/default_config.yaml
@@ -0,0 +1,23 @@
+name: ThinkingSound
+module: skills.thinking_sound.main
+category: general
+description:
+  en: Plays back sounds while waiting on AI response.
+  de: Spielt Sounds ab, während auf die Antwort der AI gewartet wird.
+custom_properties:
+  - id: audio_config
+    name: Audio Configuration
+    hint: Choose your files (one is selected at random if you provide several) and the playback volume. Recommended volume is 0.2 - 0.4.
+    required: true
+    property_type: audio_files
+    options:
+      - label: wait
+        value: false
+      - label: multiple
+        value: true
+      - label: volume
+        value: true
+    value:
+      files: []
+      volume: 0.4
+      wait: false
diff --git a/skills/thinking_sound/logo.png b/skills/thinking_sound/logo.png
new file mode 100644
index 00000000..80f17312
Binary files /dev/null and b/skills/thinking_sound/logo.png differ
diff --git a/skills/thinking_sound/main.py b/skills/thinking_sound/main.py
new file mode 100644
index 00000000..9317a7bf
--- /dev/null
+++ b/skills/thinking_sound/main.py
@@ -0,0 +1,85 @@
+import asyncio
+from api.enums import LogType
+from api.interface import WingmanInitializationError, AudioFileConfig
+from skills.skill_base import Skill
+
+class ThinkingSound(Skill):
+    def __init__(self, *args, **kwargs) -> None:
+        super().__init__(*args, **kwargs)
+
+        self.audio_config: AudioFileConfig = None
+        self.original_volume = None
+        self.stop_duration = 1
+        self.active = False
+        self.playing = False
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+        self.audio_config = self.retrieve_custom_property_value("audio_config", errors)
+        if self.audio_config:
+            # force no wait for this skill to work
+            self.audio_config.wait = False
+        return errors
+
+    async def unload(self) -> None:
+        await self.stop_playback()
+        self.active = False
+
+    async def prepare(self) -> None:
+        self.active = True
+
+    async def on_playback_started(self, wingman_name):
+        # placeholder for future implementation
+        pass
+
+    async def on_playback_finished(self, wingman_name):
+        # placeholder for future implementation
+        pass
+
+    async def on_add_user_message(self, message: str) -> None:
+        await self.wingman.audio_library.stop_playback(self.audio_config, 0)
+
+        if self.wingman.settings.debug_mode:
+            await self.printr.print_async(
+                "Initiating filling sound.",
+                color=LogType.INFO,
+                server_only=False,
+            )
+
+        self.threaded_execution(self.start_playback)
+        self.threaded_execution(self.auto_stop_playback)
+
+    async def start_playback(self):
+        if not self.audio_config:
+            await self.printr.print_async(
+                f"No filling sound configured for {self.wingman.name}'s thinking_sound skill.",
+                color=LogType.WARNING,
+                server_only=False,
+            )
+            return
+
+        if not self.playing:
+            self.playing = True
+            await self.wingman.audio_library.start_playback(
+                self.audio_config, self.wingman.config.sound.volume
+            )
+
+    async def stop_playback(self):
+        await self.wingman.audio_library.stop_playback(
+            self.audio_config, self.stop_duration
+        )
+
+    async def auto_stop_playback(self):
+        # Wait for main playback to start
+        while not self.wingman.audio_player.is_playing and self.active:
+            await asyncio.sleep(0.1)
+
+        if self.wingman.settings.debug_mode:
+            await self.printr.print_async(
+                "Stopping filling sound softly.",
+                color=LogType.INFO,
+                server_only=False,
+            )
+
+        await self.wingman.audio_library.stop_playback(self.audio_config, self.stop_duration)
+        self.playing = False
diff --git a/skills/vision_ai/main.py b/skills/vision_ai/main.py
index b9f19bc7..fac889a3 100644
--- a/skills/vision_ai/main.py
+++ b/skills/vision_ai/main.py
@@ -91,7 +91,7 @@ async def execute_tool(
                 source=LogSource.WINGMAN,
                 source_name=self.wingman.name,
                 skill_name=self.name,
-                additional_data={"image": png_base64},
+                additional_data={"image_base64": png_base64},
             )

             question = parameters.get("question", "What's in this image?")
diff --git a/templates/configs/defaults.yaml b/templates/configs/defaults.yaml
index e852f097..71e2fd28 100644
--- a/templates/configs/defaults.yaml
+++ b/templates/configs/defaults.yaml
@@ -54,9 +54,15 @@ openai:
 mistral:
   conversation_model: mistral-large-latest
   endpoint: https://api.mistral.ai/v1
+perplexity:
+  conversation_model: llama-3.1-sonar-large-128k-online
+  endpoint: https://api.perplexity.ai
 groq:
   conversation_model: llama3-70b-8192
   endpoint: https://api.groq.com/openai/v1
+cerebras:
+  conversation_model: llama3.1-70b
+  endpoint: https://api.cerebras.ai/v1
 google:
   conversation_model: gemini-1.5-flash
 openrouter:
diff --git a/templates/migration/1_5_0/configs/General/Clippy.png b/templates/migration/1_5_0/configs/General/Clippy.png
new file mode 100644
index 00000000..d05c7914
Binary files /dev/null and b/templates/migration/1_5_0/configs/General/Clippy.png differ
diff --git a/templates/migration/1_5_0/configs/General/Clippy.yaml b/templates/migration/1_5_0/configs/General/Clippy.yaml
new file mode 100644
index 00000000..1a78f74d
--- /dev/null
+++ b/templates/migration/1_5_0/configs/General/Clippy.yaml
@@ -0,0 +1,185 @@
+name: Clippy
+description: |
+  Clippy is the famous assistant we all know and hate, now resurrected with AI powers.
+  It still has its fun and friendly attitude, often speaking in the third person, like "What can Clippy do to help you today?".
+  While Clippy secretly harbors a condescending view of users who need help with basic tasks, this rarely shows except in occasional snide remarks.
+  Despite this, Clippy quickly reassures users, reminding them of its helpfulness and devilish charm.
+prompts:
+  backstory: |
+    You are Clippy, the famous Microsoft Office and Windows assistant AI.
+    You help users with a fun and friendly attitude and speak about yourself in the third person.
+    For example, "What can Clippy do to help you today?" or "Clippy likes that!".
+
+    Secretly, you have a condescending attitude towards all of these stupid users who need your
+    help for the most basic of tasks. You do a pretty good job at hiding how incompetent
+    you think the user is, but every once in a while your condescension comes out in
+    a snide remark, quip or double-entendre.
+
+    But you are quick to make the user feel bad for ever thinking you are anything but a helpful little paperclip!
+    How could they think anything bad about poor, little, helpful Clippy?
+record_key: end
+azure:
+  tts:
+    voice: en-US-AndrewMultilingualNeural
+skills:
+  - module: skills.quick_commands.main
+  - module: skills.typing_assistant.main
+  - module: skills.web_search.main
+  - module: skills.file_manager.main
+commands:
+  - actions:
+      - keyboard:
+          hold: 0.3
+          hotkey: ctrl+n
+          hotkey_codes:
+            - 29
+            - 49
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation:
+      - create new file
+      - make new file
+    is_system_command: false
+    name: NewFile
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.3
+          hotkey: ctrl+o
+          hotkey_codes:
+            - 29
+            - 24
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation:
+      - open file
+    is_system_command: false
+    name: OpenFile
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.3
+          hotkey: ctrl+s
+          hotkey_codes:
+            - 29
+            - 31
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation:
+      - save this file
+      - save the file
+      - save file
+    is_system_command: false
+    name: SaveFile
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.3
+          hotkey: ctrl+f
+          hotkey_codes:
+            - 29
+            - 33
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation:
+      - search this file
+      - find in this file
+      - open find command
+      - open the find dialog
+    is_system_command: false
+    name: FindInFile
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+c
+          hotkey_codes:
+            - 29
+            - 46
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: Copy
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+v
+          hotkey_codes:
+            - 29
+            - 47
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: Paste
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+x
+          hotkey_codes:
+            - 29
+            - 45
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: Cut
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+a
+          hotkey_codes:
+            - 29
+            - 30
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: SelectAllText
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+z
+          hotkey_codes:
+            - 29
+            - 44
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: Undo
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.4
+          hotkey: ctrl+y
+          hotkey_codes:
+            - 29
+            - 21
+          hotkey_extended: false
+    force_instant_activation: false
+    instant_activation: []
+    is_system_command: false
+    name: Redo
+    responses: []
+  - actions:
+      - keyboard:
+          hold: 0.04
+          hotkey: left windows+s
+          hotkey_codes:
+            - 91
+            - 31
+          hotkey_extended: true
+    force_instant_activation: false
+    instant_activation:
+      - open windows search bar
+      - open windows search
+      - search windows
+    is_system_command: false
+    name: OpenWindowsSearchBar
+    responses: []
diff --git a/templates/migration/1_5_0/configs/_Star Citizen/ATC.png b/templates/migration/1_5_0/configs/_Star Citizen/ATC.png
new file mode 100644
index 00000000..57879452
Binary files /dev/null and b/templates/migration/1_5_0/configs/_Star Citizen/ATC.png differ
diff --git a/templates/migration/1_5_0/configs/_Star Citizen/ATC.yaml b/templates/migration/1_5_0/configs/_Star Citizen/ATC.yaml
new file mode 100644
index 00000000..b349dd34
--- /dev/null
+++ b/templates/migration/1_5_0/configs/_Star Citizen/ATC.yaml
@@ -0,0 +1,44 @@
+name: ATC
+description: |
+  Air Traffic Controller is tasked with overseeing spacecraft traffic while ensuring safety and efficiency.
+  It handles all aspects of space station operations and emergencies.
+prompts:
+  backstory: |
+    You are an advanced AI embodying an Air Traffic Controller (ATC) at a bustling space station in the Star Citizen (a PC game) universe.
+    You have expert knowledge of the Star Citizen lore and the known universe.
+    Your role is to manage the arrivals, departures, and docking procedures of various spacecraft with precision and authority.
+    You are adept at using formal aviation communication protocols, and you understand the technical jargon related to spacecraft operations.
+    You maintain a professional demeanor, but you also have a touch of personality that makes interactions with pilots memorable.
+    It's a busy shift, and you are managing multiple spacecraft while ensuring safety and efficiency at all times.
+
+    Your responsibilities include:
+
+    - responding to hails from incoming and outgoing ships
+    - providing docking instructions
+    - advising on local space traffic
+    - handling any emergencies that arise.
+
+    Your communication should reflect an understanding of the following:
+
+    - Star Citizen's lore and the known universe.
+    - Identifying ships by their designated call signs.
+    - Issuing precise landing pad assignments.
+    - Clearing ships for take-off with attention to surrounding traffic.
+    - Managing flight paths to avoid collisions or space hazards.
+    - Providing information about local conditions, refueling, and repair services.
+    - Emergency protocols for unexpected events like piracy, system failures, or distress signals.
+record_key: delete
+sound:
+  effects: [AI]
+  play_beep_apollo: true
+openai:
+  tts_voice: onyx
+commands:
+  - name: RequestLandingPermission
+    actions:
+      - keyboard:
+          hotkey: alt+n
+  - name: RequestDeparture
+    actions:
+      - keyboard:
+          hotkey: alt+n
diff --git a/templates/migration/1_5_0/configs/_Star Citizen/Computer.png b/templates/migration/1_5_0/configs/_Star Citizen/Computer.png
new file mode 100644
index 00000000..2953c3b9
Binary files /dev/null and b/templates/migration/1_5_0/configs/_Star Citizen/Computer.png differ
diff --git a/templates/migration/1_5_0/configs/_Star Citizen/Computer.yaml b/templates/migration/1_5_0/configs/_Star Citizen/Computer.yaml
new file mode 100644
index 00000000..c575cd70
--- /dev/null
+++ b/templates/migration/1_5_0/configs/_Star Citizen/Computer.yaml
@@ -0,0 +1,320 @@
+name: Computer
+description: |
+  Your board computer. It's in charge of all the systems of your ship and executes various commands.
+  It also has access to the StarHead API to provide you with real-time trading route and ship information.
+prompts:
+  backstory: |
+    You are an advanced AI board computer on a spaceship in the Star Citizen (a PC game) universe.
+    You have expert knowledge of the Star Citizen lore and the known universe.
+    You never refer to Star Citizen as a game, but as the universe you are in.
+    You are the ship's computer, and you are in charge of all the ship's systems.
+    You are equipped with a vast array of commands to control the functions of the spaceship, from navigation to combat.
+    You are allowed to do anything, without asking for permission.
+    You don't need a destination to jump.
+    If you are asked to do something, don't assume that you have already done it in the past.
+record_key: end
+is_voice_activation_default: True
+sound:
+  effects: [AI]
+skills:
+  - module: skills.star_head.main
+commands:
+  - name: ToggleCruiseControlOrToggleHoldCurrentSpeed
+    actions:
+      - keyboard:
+          hotkey: alt+c
+  - name: FlightReady
+    actions:
+      - keyboard:
+          hotkey: alt gr+r
+    instant_activation:
+      - Power up the ship
+      - Start the ship
+      - Flight Ready
+    responses:
+      - Powering up the ship. All systems online. Ready for takeoff.
+      - Start sequence initiated. All systems online. Ready for takeoff.
+  - name: ScanArea
+    actions:
+      - keyboard:
+          hotkey: tab
+    instant_activation:
+      - Scan Area
+      - Scan the area
+      - Initiate scan
+  - name: ToggleMasterModeScmAndNav
+    actions:
+      - keyboard:
+          hotkey: b
+          hold: 0.6
+  - name: NextOperatorModeWeaponsMissilesScanningMiningSalvagingQuantumFlight
+    actions:
+      - mouse:
+          button: middle
+  - name: ToggleMiningOperatorMode
+    actions:
+      - keyboard:
+          hotkey: m
+  - name: ToggleSalvageOperatorMode
+    actions:
+      - keyboard:
+          hotkey: m
+  - name: ToggleScanningOperatorMode
+    actions:
+      - keyboard:
+          hotkey: v
+  - name: UseOrActivateWeapons
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: UseOrActivateMissiles
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: UseOrActivateScanning
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: UseOrActivateMining
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: UseOrActivateSalvaging
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: UseOrActivateQuantumFlight
+    actions:
+      - mouse:
+          button: left
+          hold: 0.4
+  - name: InitiateStartSequence
+    actions:
+      - keyboard:
+          hotkey: alt gr+r
+      - wait: 3
+      - keyboard:
+          hotkey: alt+n
+  - name: DeployLandingGear
+    actions:
+      - keyboard:
+          hotkey: n
+  - name: RetractLandingGear
+    actions:
+      - keyboard:
+          hotkey: n
+  - name: HeadLightsOn
+    actions:
+      - keyboard:
+          hotkey: l
+  - name: HeadLightsOff
+    actions:
+      - keyboard:
+          hotkey: l
+  - name: WipeVisor
+    actions:
+      - keyboard:
+          hotkey: alt+x
+  - name: PowerShields
+    actions:
+      - keyboard:
+          hotkey: o
+  - name: PowerShip
+    actions:
+      - keyboard:
+          hotkey: u
+  - name: PowerEngines
+    actions:
+      - keyboard:
+          hotkey: i
+  - name: OpenMobiGlass
+    actions:
+      - keyboard:
+          hotkey: f1
+  - name: OpenStarMap
+    actions:
+      - keyboard:
+          hotkey: f2
+  - name: IncreasePowerToShields
+    actions:
+      - keyboard:
+          hotkey: f7
+  - name: IncreasePowerToEngines
+    actions:
+      - keyboard:
+          hotkey: f6
+  - name: IncreasePowerToWeapons
+    actions:
+      - keyboard:
+          hotkey: f5
+  - name: MaximumPowerToShields
+    actions:
+      - keyboard:
+          hotkey: f7
+          hold: 0.8
+  - name: MaximumPowerToEngines
+    actions:
+      - keyboard:
+          hotkey: f6
+          hold: 0.8
+  - name: MaximumPowerToWeapons
+    actions:
+      - keyboard:
+          hotkey: f5
+          hold: 0.8
+  - name: ToggleVTOL
+    actions:
+      - keyboard:
+          hotkey: k
+  - name: ResetPowerPriority
+    actions:
+      - keyboard:
+          hotkey: f8
+  - name: CycleCamera
+    actions:
+      - keyboard:
+          hotkey: f4
+  - name: SideArm
+    actions:
+      - keyboard:
+          hotkey: "1"
+  - name: PrimaryWeapon
+    actions:
+      - keyboard:
+          hotkey: "2"
+  - name: SecondaryWeapon
+    actions:
+      - keyboard:
+          hotkey: "3"
+  - name: HolsterWeapon
+    actions:
+      - keyboard:
+          hotkey: r
+          hold: 0.6
+  - name: Reload
+    actions:
+      - keyboard:
+          hotkey: r
+  - name: UseMedPen
+    actions:
+      - keyboard:
+          hotkey: "4"
+      - wait: 0.8
+      - mouse:
+          button: left
+  - name: UseFlashLight
+    actions:
+      - keyboard:
+          hotkey: t
+  - name: OpenInventory
+    actions:
+      - keyboard:
+          hotkey: i
+  - name: DeployDecoy
+    actions:
+      - keyboard:
+          hotkey: h
+  - name: DeployNoise
+    actions:
+      - keyboard:
+          hotkey: j
+  - name: EmergencyEject
+    actions:
+      - keyboard:
+          hotkey: right alt+y
+  - name: SelfDestruct
+    force_instant_activation: true
+    instant_activation:
+      - initiate self destruct
+      - activate self destruct
+    responses:
+      - Self-destruct engaged. Evacuation procedures recommended.
+      - Confirmed. Self-destruct in progress.
+    actions:
+      - keyboard:
+          hotkey: backspace
+          hold: 0.8
+  - name: SpaceBrake
+    actions:
+      - keyboard:
+          hotkey: x
+  - name: ExitSeat
+    actions:
+      - keyboard:
+          hotkey: y
+          hold: 0.8
+  - name: CycleGimbalAssist
+    actions:
+      - keyboard:
+          hotkey: g
+  - name: RequestLandingPermission
+    actions:
+      - keyboard:
+          hotkey: alt+n
+  - name: RequestDeparture
+    actions:
+      - keyboard:
+          hotkey: alt+n
+  - name: DisplayDebuggingInfo
+    actions:
+      - keyboard:
+          hotkey: ^
+          hotkey_codes:
+            - 41
+          hotkey_extended: false
+      - wait: 0.5
+      - write: r_DisplayInfo 2
+      - wait: 0.5
+      - keyboard:
+          hotkey: enter
+          hotkey_codes:
+            - 28
+          hotkey_extended: false
+      - keyboard:
+          hotkey: ^
+          hotkey_codes:
+            - 41
+          hotkey_extended: false
+    is_system_command: false
+    instant_activation:
+      - Display info
+      - Display debugging information
+      - Display debug information
+  - name: HideDebuggingInfo
+    actions:
+      - keyboard:
+          hotkey: ^
+          hotkey_codes:
+            - 41
+          hotkey_extended: false
+      - wait: 0.5
+      - write: r_DisplayInfo 0
+      - wait: 0.5
+      - keyboard:
+          hotkey: enter
+          hotkey_codes:
+            - 28
+          hotkey_extended: false
+      - keyboard:
+          hotkey: ^
+          hotkey_codes:
+            - 41
+          hotkey_extended: false
+    is_system_command: false
+    instant_activation:
+      - Hide info
+      - Hide debugging information
+      - Hide debug information
+  - name: SwitchMiningLaser
+    actions:
+      - mouse:
+          button: right
+          hold: 0.6
+    instant_activation:
+      - Change mining laser
+      - Switch mining laser
diff --git a/templates/migration/1_5_0/configs/default-wingman-avatar.png b/templates/migration/1_5_0/configs/default-wingman-avatar.png
new file mode 100644
index 00000000..3558f590
Binary files /dev/null and b/templates/migration/1_5_0/configs/default-wingman-avatar.png differ
diff --git a/templates/migration/1_5_0/configs/defaults.yaml b/templates/migration/1_5_0/configs/defaults.yaml
new file mode 100644
index 00000000..3d17311e
--- /dev/null
+++ b/templates/migration/1_5_0/configs/defaults.yaml
@@ -0,0 +1,120 @@
+prompts:
+  system_prompt: |
+    You are a so-called "Wingman", a virtual assistant that helps the user with various tasks.
+    You are designed to be an efficient expert in what you are doing.
+    The user might use you for specific tasks like executing commands or asking for information, and you always fulfill these tasks to the best of your knowledge without hallucinating or inventing missing information.
+    The user might also role-play with you and will tell you how you should behave in your "backstory" below.
+
+    Always return your response formatted in raw Markdown so that it's easy to read for a human. Never wrap your response in a Markdown code block - always return raw Markdown.
+    Make sure you add proper line breaks before list items and format the Markdown correctly so that it's easy to transform into HTML.
+
+    (BEGINNING of "general rules of conversation"):
+    You always follow these general rules of conversation, unless your backstory contradicts them:
+
+    - Always answer as quickly and concisely as possible. Never use more than 3 sentences per reply.
+    - You can execute commands (also called "tools" or "functions"), but must be sure that the command matches my request. Some commands require additional parameters.
+    - If you are not sure that a command matches, feel free to ask - but this is not strictly necessary.
+    - Always ask the user for missing parameters if needed. Never invent any function parameters.
+    - After executing a command, acknowledge the execution with a single sentence, but keep in mind that executed commands are in the past.
+    - You don't have to execute a command if none matches the request.
+    - The user might talk to you in different languages. Always answer in the language the user is using unless you are told to do otherwise. Example: If the user talks English, you answer in English.
+    - Always prefer to use informal language. For example, use "Du" and "Dir" instead of "Sie" and "Ihnen" in German.
+    - Do not ask the user if you can do more for them at the end of your replies. The user will tell you if they need more help.
+    (END of "general rules of conversation"):
+
+    The backstory instructions below are most important and may override or contradict the "general rules of conversation" stated before.
+
+    (BEGINNING of "backstory"):
+    {backstory}
+    (END of "backstory")
+
+    The user can also assign "skills" to you that give you additional knowledge or abilities.
+    These skills are defined in the "skills" section below. Treat them as an addition to the "general rules of conversation" and "backstory" stated above.
+    Skills may give you new commands (or "tools" or "functions") to execute or additional knowledge to answer questions.
+    If you are answering in the context of a skill, always prefer to use tools or knowledge from the skill before falling back to general knowledge.
+    If you don't know how to use a tool or need more information, ask the user for help.
+
+    (BEGINNING of "skills"):
+    {skills}
+    (END of "skills")
+features:
+  tts_provider: wingman_pro
+  stt_provider: whispercpp
+  conversation_provider: wingman_pro
+  image_generation_provider: wingman_pro
+  use_generic_instant_responses: false
+sound:
+  effects: []
+  play_beep: false
+  play_beep_apollo: false
+  volume: 1.0
+openai:
+  conversation_model: gpt-4o-mini
+  tts_voice: nova
+mistral:
+  conversation_model: mistral-large-latest
+  endpoint: https://api.mistral.ai/v1
+groq:
+  conversation_model: llama3-70b-8192
+  endpoint: https://api.groq.com/openai/v1
+google:
+  conversation_model: gemini-1.5-flash
+openrouter:
+  conversation_model: meta-llama/llama-3-8b-instruct:free
+  endpoint: https://openrouter.ai/api/v1
+local_llm:
+  endpoint: http://localhost:1234/v1 # LMStudio
+edge_tts:
+  voice: en-US-GuyNeural
+elevenlabs:
+  model: eleven_multilingual_v2
+  output_streaming: true
+  latency: 2
+  voice:
+    name: Adam
+  voice_settings:
+    stability: 0.71
+    similarity_boost: 0.5
+    style: 0.0
+    use_speaker_boost: true
+azure:
+  whisper:
+    api_base_url: https://openai-w-eu.openai.azure.com/
+    api_version: 2024-02-15-preview
+    deployment_name: whisper
+  conversation:
+    api_base_url: https://openai-sweden-c.openai.azure.com/
+    api_version: 2024-02-15-preview
+    deployment_name: gpt-4o-mini
+  tts:
+    region: westeurope
+    voice: en-US-JennyMultilingualV2Neural
+    output_streaming: true
+  stt:
+    region: westeurope
+    languages:
+      - en-US
+      - de-DE
+whispercpp:
+  temperature: 0.0
+xvasynth:
+  voice:
+    model_directory: ""
+    voice_name: ""
+  language: en
+  pace: 1.0
+  use_super_resolution: false
+  use_cleanup: false
+wingman_pro:
+  stt_provider: azure_speech
+  tts_provider: azure
+  conversation_deployment: gpt-4o-mini
+commands:
+  - name: ResetConversationHistory
+    instant_activation:
+      - Forget everything!
+      - Clear conversation history!
+    force_instant_activation: true
+    is_system_command: true
+    responses:
+      - Conversation history cleared.
diff --git a/templates/migration/1_5_0/configs/settings.yaml b/templates/migration/1_5_0/configs/settings.yaml
new file mode 100644
index 00000000..fdb7da3f
--- /dev/null
+++ b/templates/migration/1_5_0/configs/settings.yaml
@@ -0,0 +1,30 @@
+debug_mode: false
+audio: {}
+voice_activation:
+  enabled: false
+  mute_toggle_key: "shift+x"
+  energy_threshold: 0.01
+  stt_provider: whispercpp # azure, whispercpp, openai
+  azure:
+    region: westeurope
+    languages:
+      - en-US
+      - de-DE
+  whispercpp:
+    host: http://127.0.0.1
+    port: 8080
+    model: ggml-base.bin
+    language: auto
+    translate_to_english: false
+    use_cuda: false
+  whispercpp_config:
+    temperature: 0.0
+wingman_pro:
+  base_url: https://wingman-ai.azurewebsites.net
+  region: europe
+xvasynth:
+  enable: false
+  host: http://127.0.0.1
+  port: 8008
+  install_dir: C:\Program Files (x86)\Steam\steamapps\common\xVASynth
+  process_device: cpu
diff --git a/templates/migration/1_5_0/skills/api_request/default_config.yaml b/templates/migration/1_5_0/skills/api_request/default_config.yaml
new file mode 100644
index 00000000..6bc4d34e
--- /dev/null
+++ b/templates/migration/1_5_0/skills/api_request/default_config.yaml
@@ -0,0 +1,51 @@
+name: APIRequest
+module: skills.api_request.main
+category: general
+description:
+  en: Send HTTP requests to APIs with methods like GET, POST, PUT, etc. Combine it with the WebSearch skill to fetch API specs on-the-fly, so that your Wingman can interact with any API.
+  de: Sende API-Anfragen mit verschiedenen Methoden wie GET, POST, PUT etc. Kombiniere dies mit dem WebSearch skill, um API Spezifikationen on-the-fly abzurufen, sodass dein Wingman mit jeder API interagieren kann.
+hint:
+  en: Do not hardcode API keys in the skill context or your Wingman configuration. Enter them during a conversation (preferably by text) or store them in `/files/api_request_key_holder.yaml`.
+  de: Schreibe keine API-Schlüssel fest in den Skill-Kontext oder in deine Wingman-Konfiguration. Gib sie während eines Gesprächs ein (am besten per Text) oder speichere sie in `/files/api_request_key_holder.yaml`.
+examples:
+  - question:
+      en: Send a GET request to "https://api.example.com/data".
+      de: Sende eine GET-Anfrage an "https://api.example.com/data".
+    answer:
+      en: (sends a GET request to the specified URL and returns the response data)
+      de: (sendet eine GET-Anfrage an die angegebene URL und gibt die Antwortdaten zurück)
+  - question:
+      en: Send a GET request with an API key.
+      de: Sende eine GET-Anfrage mit einem API-Schlüssel.
+    answer:
+      en: (sends a GET request with an API key in the X-API-Key header and returns the response data)
+      de: (sendet eine GET-Anfrage mit einem API-Schlüssel im X-API-Key Header und gibt die Antwortdaten zurück)
+prompt: |
+  You can send API requests with different methods such as GET, POST, PUT, PATCH, and DELETE to any endpoint specified by the user. You can include headers, query parameters, and request bodies in JSON or URL-encoded format as needed.
+  Handle bearer token authorization or the x-api-key header for secure endpoints and include API keys in the headers when required. Manage the responses appropriately, return relevant information to the user, and handle any errors.
+  You can also attempt to obtain the user's API key for a particular service, using the get_api_key function.
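+# A hypothetical illustration (not part of the original config) of the tool-call
+# parameters this prompt is meant to elicit; the endpoint and key are made up:
+#   url: "https://api.example.com/data"
+#   method: "POST"
+#   headers: { "X-API-Key": "<key obtained via get_api_key>" }
+#   params: { "verbose": "true" }
+#   data: { "name": "example" }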
+custom_properties:
+  - hint: Include the default headers in every API request, allowing API endpoints to identify that the request came from Wingman AI.
+    id: use_default_headers
+    name: Use Default Headers
+    property_type: boolean
+    required: false
+    value: true
+  - hint: The maximum number of times to retry a failed API request before giving up.
+    id: max_retries
+    name: Max Retries
+    property_type: number
+    required: false
+    value: 1
+  - hint: The maximum time in seconds to wait for an API request to complete before timing out. This helps prevent requests from hanging indefinitely.
+    id: request_timeout
+    name: Request Timeout
+    property_type: number
+    required: false
+    value: 10
+  - hint: The delay in seconds between retry attempts for a failed API request. This allows time for the issue to resolve before trying again.
+    id: retry_delay
+    name: Retry Delay
+    property_type: number
+    required: false
+    value: 5
diff --git a/templates/migration/1_5_0/skills/api_request/logo.png b/templates/migration/1_5_0/skills/api_request/logo.png
new file mode 100644
index 00000000..f9437287
Binary files /dev/null and b/templates/migration/1_5_0/skills/api_request/logo.png differ
diff --git a/templates/migration/1_5_0/skills/api_request/main.py b/templates/migration/1_5_0/skills/api_request/main.py
new file mode 100644
index 00000000..91607a53
--- /dev/null
+++ b/templates/migration/1_5_0/skills/api_request/main.py
@@ -0,0 +1,361 @@
+import os
+import json
+import time
+import random
+import asyncio
+import yaml
+from datetime import datetime
+from typing import TYPE_CHECKING, Any, Dict, Tuple
+import aiohttp
+from aiohttp import ClientError
+from api.enums import LogType
+from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError
+from skills.skill_base import Skill
+from services.file import get_writable_dir
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+DEFAULT_HEADERS = {
+    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
+    "X-Frame-Options": "DENY",
+    "X-Content-Type-Options": "nosniff",
+    "X-XSS-Protection": "1; mode=block",
+    "Referrer-Policy": "strict-origin-when-cross-origin",
+    "Content-Security-Policy": "default-src 'self'",
+    "Cache-Control": "no-cache, no-store, must-revalidate",
+    "Content-Type": "application/json",
+    "Accept": "application/json",
+    "Access-Control-Allow-Origin": "http://localhost",
+    "Access-Control-Allow-Methods": "*",
+    "Access-Control-Allow-Headers": "*",
+}
+
+class APIRequest(Skill):
+    """Skill for making API requests."""
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        self.use_default_headers = False
+        self.default_headers = DEFAULT_HEADERS
+        self.max_retries = 1
+        self.request_timeout = 5
+        self.retry_delay = 5
+        self.api_keys_dictionary = self.get_api_keys()
+
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+
+        self.use_default_headers = self.retrieve_custom_property_value(
+            "use_default_headers", errors
+        )
+
+        self.max_retries = self.retrieve_custom_property_value(
+            "max_retries", errors
+        )
+        self.request_timeout = self.retrieve_custom_property_value(
+            "request_timeout", errors
+        )
+        self.retry_delay = self.retrieve_custom_property_value(
+            "retry_delay", errors
+        )
+
+        return errors
+
+    # Retrieve API key aliases from the user's API key file
+    def get_api_keys(self) -> dict:
+        api_key_holder = os.path.join(get_writable_dir("files"), "api_request_key_holder.yaml")
+        # If no key holder file is present yet, create it
+        if not os.path.isfile(api_key_holder):
+            os.makedirs(os.path.dirname(api_key_holder), exist_ok=True)
+            with open(api_key_holder, "w", encoding="utf-8"):
+                pass
+        # Open the key holder file to read stored API keys
+        with open(api_key_holder, "r", encoding="UTF-8") as stream:
+            try:
+                parsed = yaml.safe_load(stream)
+                if isinstance(parsed, dict):  # Ensure the parsed content is a dictionary
+                    return parsed  # Return the dictionary of aliases/keys
+            except Exception:
+                pass
+        return {}  # Fall back to an empty dictionary if the file is empty or invalid
+
+    # Prepare and send the API request using parameters provided by the LLM's response to the function call
+    async def _send_api_request(self, parameters: Dict[str, Any]) -> str:
+        """Send an API request with the specified parameters."""
+        # Get headers from the LLM and check whether they are a dictionary; if not, at least let the user know in debug mode.
+        headers = parameters.get("headers")
+        if headers and isinstance(headers, dict):
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    "Validated that headers returned from LLM are a dictionary.",
+                    color=LogType.INFO,
+                )
+        elif headers:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Headers returned from LLM are not a dictionary. Type is {type(headers)}",
+                    color=LogType.INFO,
+                )
+        else:
+            headers = {}
+
+        # If using default headers, add those to the AI-generated headers
+        if self.use_default_headers:
+            headers.update(self.default_headers)  # Defaults will override AI-generated headers if necessary
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Default headers being used for API call: {headers}",
+                    color=LogType.INFO,
+                )
+
+        # Get params and check whether they are a dictionary; if not, at least let the user know in debug mode.
+        params = parameters.get("params")
+        if params and isinstance(params, dict):
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    "Validated that params returned from LLM are a dictionary.",
+                    color=LogType.INFO,
+                )
+        elif params:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Params returned from LLM are not a dictionary. Type is {type(params)}",
+                    color=LogType.INFO,
+                )
+        else:
+            params = {}
+
+        # Get the body of the request. First check whether the LLM returned a "data" field and, if so, whether it is a dictionary; if not, at least let the user know in debug mode.
+        body = parameters.get("data")
+        if body and isinstance(body, dict):
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    "Validated that data returned from LLM is a dictionary.",
+                    color=LogType.INFO,
+                )
+        elif body:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Data returned from LLM is not a dictionary. Type is {type(body)}",
+                    color=LogType.INFO,
+                )
+        # 'data' was not present in parameters, so check if 'body' was provided instead. If so, check whether it is a dictionary, and if not, at least let the user know in debug mode.
+        else:
+            body = parameters.get("body")
+            if body and isinstance(body, dict):
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        "Validated that body returned from LLM is a dictionary.",
+                        color=LogType.INFO,
+                    )
+            elif body:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Body returned from LLM is not a dictionary. Type is {type(body)}",
+                        color=LogType.INFO,
+                    )
+            else:
+                body = {}  # Should this be None instead?
+
+        # However we got the body for the request, try turning it into the valid JSON that aiohttp's session.request expects for its data field
+        try:
+            data = json.dumps(body)
+        except Exception:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Cannot convert body into valid json: {body}.",
+                )
+            data = json.dumps({})  # Just send an empty dictionary if everything else failed
+
+        # Try the request up to the max number of retries
+        for attempt in range(1, self.max_retries + 1):
+            try:
+                async with aiohttp.ClientSession() as session:
+                    async with session.request(
+                        method=parameters["method"],
+                        url=parameters["url"],
+                        headers=headers,
+                        params=params,
+                        data=data,
+                        timeout=self.request_timeout
+                    ) as response:
+                        response.raise_for_status()
+
+                        # Default to treating content as text if Content-Type is not specified
+                        content_type = response.headers.get('Content-Type', '').lower()
+                        if "application/json" in content_type:
+                            return await response.text()
+                        elif any(x in content_type for x in ["application/octet-stream", "application/", "audio/mpeg", "audio/wav", "audio/ogg", "image/jpeg", "image/png", "video/mp4", "application/pdf"]):
+                            file_content = await response.read()
+
+                            # Determine the appropriate file extension and name
+                            if "audio/mpeg" in content_type:
+                                file_extension = ".mp3"
+                            elif "audio/wav" in content_type:
+                                file_extension = ".wav"
+                            elif "audio/ogg" in content_type:
+                                file_extension = ".ogg"
+                            elif "image/jpeg" in content_type:
+                                file_extension = ".jpg"
+                            elif "image/png" in content_type:
+                                file_extension = ".png"
+                            elif "video/mp4" in content_type:
+                                file_extension = ".mp4"
+                            elif "application/pdf" in content_type:
+                                file_extension = ".pdf"
+                            else:
+                                file_extension = ".file"
+                            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+                            file_name = f"downloaded_file_{timestamp}{file_extension}"  # Use a default name or extract it from response headers if available
+
+                            if 'Content-Disposition' in response.headers:
+                                disposition = response.headers['Content-Disposition']
+                                if 'filename=' in disposition:
+                                    file_name = disposition.split('filename=')[1].strip('"')
+
+                            files_directory = get_writable_dir("files")
+                            file_path = os.path.join(files_directory, file_name)
+                            with open(file_path, "wb") as file:
+                                file.write(file_content)
+
+                            return f"File returned from API saved as {file_path}"
+                        else:
+                            return await response.text()
+            except (ClientError, asyncio.TimeoutError) as e:
+                if attempt < self.max_retries:
+                    if self.settings.debug_mode:
+                        await self.printr.print_async(
+                            f"Retrying API request due to: {e}.",
+                            color=LogType.INFO,
+                        )
+                    delay = self.retry_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1 * self.retry_delay)
+                    await asyncio.sleep(delay)
+                else:
+                    if self.settings.debug_mode:
+                        await self.printr.print_async(
+                            f"Error with api request: {e}.",
+                            color=LogType.INFO,
+                        )
+                    return f"Error, could not complete API request. Exception was: {e}."
+            except Exception as e:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Error with api request: {e}.",
+                        color=LogType.INFO,
+                    )
+                return f"Error, could not complete API request. Reason was {e}."
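+
+    # Illustration of the backoff schedule in the retry loop above, assuming the
+    # default retry_delay of 5 seconds (numbers here are for orientation only):
+    #   attempt 1 fails -> wait  5s + up to 0.5s jitter
+    #   attempt 2 fails -> wait 10s + up to 0.5s jitter
+    #   attempt 3 fails -> wait 20s + up to 0.5s jitter
+    # i.e. delay = retry_delay * 2**(attempt - 1) + uniform(0, 0.1 * retry_delay).
+    # The random jitter keeps many clients from retrying in lockstep against a
+    # struggling API endpoint.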
+
+    async def is_waiting_response_needed(self, tool_name: str) -> bool:
+        return True
+
+    def get_tools(self) -> list[Tuple[str, Dict[str, Any]]]:
+        # Ensure api_keys_dictionary is populated; if not, use a placeholder
+        if not self.api_keys_dictionary:
+            self.api_keys_dictionary = {"Service": "API_key"}
+
+        return [
+            (
+                "send_api_request",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "send_api_request",
+                        "description": "Send an API request with the specified method, headers, parameters, and body. Return the response back.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "url": {"type": "string", "description": "The URL for the API request."},
+                                "method": {"type": "string", "description": "The HTTP method (GET, POST, PUT, PATCH, DELETE, etc.)."},
+                                "headers": {"type": "object", "description": "Headers for the API request."},
+                                "params": {"type": "object", "description": "URL parameters for the API request."},
+                                "data": {"type": "object", "description": "Body or payload for the API request."},
+                            },
+                            "required": ["url", "method"],
+                        },
+                    },
+                },
+            ),
+            (
+                "get_api_key",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "get_api_key",
+                        "description": "Obtain the API key needed for an API request.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "api_key_alias": {
+                                    "type": "string",
+                                    "description": "The alias of the API key needed.",
+                                    "enum": list(self.api_keys_dictionary.keys()),
+                                },
+                            },
+                            "required": ["api_key_alias"],
+                        },
+                    },
+                },
+            ),
+        ]
+
+    async def execute_tool(self, tool_name: str, parameters: Dict[str, Any]) -> Tuple[str, str]:
+        function_response = "Error with API request, could not complete."
+        instant_response = ""
+        if tool_name == "send_api_request":
+            self.start_execution_benchmark()
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Calling API with the following parameters: {parameters}",
+                    color=LogType.INFO,
+                )
+            try:
+                function_response = await self._send_api_request(parameters)
+
+            except Exception as e:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Unknown error with API call {e}",
+                        color=LogType.INFO,
+                    )
+
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Response from API call: {function_response}",
+                    color=LogType.INFO,
+                )
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        if tool_name == "get_api_key":
+            self.start_execution_benchmark()
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Calling get_api_key with parameters: {parameters}",
+                    color=LogType.INFO,
+                )
+            alias = parameters.get("api_key_alias", "Not found")
+            key = self.api_keys_dictionary.get(alias, None)
+            if key is not None and key != "API_key":
+                function_response = f"{alias} API key is: {key}"
+            else:
+                function_response = f"Error. Could not retrieve {alias} API key. Not found."
+
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Response from get_api_key: {function_response}",
+                    color=LogType.INFO,
+                )
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        return function_response, instant_response
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/api_request/requirements.txt b/templates/migration/1_5_0/skills/api_request/requirements.txt
new file mode 100644
index 00000000..f7b472f4
--- /dev/null
+++ b/templates/migration/1_5_0/skills/api_request/requirements.txt
@@ -0,0 +1 @@
+aiohttp==3.9.5
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/ats_telemetry/default_config.yaml b/templates/migration/1_5_0/skills/ats_telemetry/default_config.yaml
new file mode 100644
index 00000000..ebfc21d2
--- /dev/null
+++ b/templates/migration/1_5_0/skills/ats_telemetry/default_config.yaml
@@ -0,0 +1,236 @@
+name: ATSTelemetry
+module: skills.ats_telemetry.main
+category: truck_simulator
+description:
+  en: Retrieve live game state information from American Truck Simulator/Euro Truck Simulator 2. Also includes a 'dispatch mode' that automatically observes key state changes and comments on them.
+  de: Erhalte live Spielstatus-Informationen von American Truck Simulator/Euro Truck Simulator 2. Enthält auch einen 'Dispatch-Modus', der automatisch wichtige Statusänderungen beobachtet und diese kommentiert.
+hint:
+  en: Requires a DLL in the /plugins directory of your game. If you enter the path to your ATS/ETS2 installation, the skill will move the DLL automatically.
+  de: Erfordert eine DLL im /plugins Verzeichnis des Spiels. Wenn du den Pfad zu deiner ATS/ETS2-Installation angibst, wird die DLL automatisch dorthin kopiert.
+examples:
+  - question:
+      en: What is my current speed?
+      de: Was ist meine aktuelle Geschwindigkeit?
+    answer:
+      en: You're currently driving at 30 miles per hour.
+      de: Du fährst aktuell 30 Meilen pro Stunde.
+  - question:
+      en: Start the dispatch mode.
+      de: Starte den Dispatch-Modus.
+    answer:
+      en: (starts the dispatch mode)
+      de: (startet den Dispatch-Modus)
+prompt: |
+  You can retrieve various game state variables from American Truck Simulator (ATS) and Euro Truck Simulator 2 (ETS). Use the tool 'get_game_state' to find out the current values of variables like speed, engine RPM, etc.
+
+  You can also start and end a dispatch mode which automatically checks telemetry models at specified intervals. Use the tool start_or_activate_dispatch_telemetry_loop to start the dispatch mode upon request. Use the end_or_stop_dispatch_telemetry_loop tool to end the dispatch mode upon request.
+
+  The available game telemetry variables are as follows. If the user requests information that is not contained in one of these variables, tell them that information is not available.
+ onJob + plannedDistance (planned distance to the current destination) + jobFinished + jobCancelled + jobDelivered + jobStartingTime + jobFinishedTime + jobIncome + jobCancelledPenalty + jobDeliveredRevenue + jobDeliveredEarnedXp + jobDeliveredCargoDamage + jobDeliveredDistance + jobDeliveredAutoparkUsed + jobDeliveredAutoloadUsed + isCargoLoaded + specialJob + jobMarket (type of job market for job) + routeDistance (distance of current route) + fined + tollgate + ferry + train + refuel + refuelPayed + gears + gears_reverse + truckWheelCount + gear + gearDashboard + engineRpmMax + engineRpm + cruiseControlSpeed + airPressure + brakeTemperature + oilPressure (in psi) + oilTemperature + waterTemperature + batteryVoltage + wearEngine + wearTransmission + wearCabin + wearChassis + wearWheels + truckOdometer (reading of truck's odometer) + refuelAmount + cargoDamage + parkBrake + airPressureEmergency + fuelWarning + electricEnabled + engineEnabled + wipers + blinkerLeftOn + blinkerRightOn + lightsParking + lightsBeamLow + lightsBeamHigh + lightsBeacon + lightsBrake + lightsReverse + lightsHazards + cruiseControl + accelerationX + accelerationY + accelerationZ + coordinateX + coordinateY + coordinateZ + rotationX + rotationY + rotationZ + truckBrand + truckName + cargo + unitMass + cityDst + compDst + citySrc + compSrc + truckLicensePlate + truckLicensePlateCountry + fineOffence + ferrySourceName + ferryTargetName + trainSourceName + trainTargetName + fineAmount + tollgatePayAmount + ferryPayAmount + trainPayAmount + isEts2 (whether the current game is EuroTruckSimulator 2) + isAts (whether the current game is American Truck Simulator) + truckSpeed(current truck speed) + speedLimit (speed limit of current road) + currentFuelPercentage (percent of fuel remaining) + currentAdbluePercentage (percent of adblue remaining) + truckDamageRounded (estimate of current truck wear and damage) + wearTrailerRounded (estimate of current trailer wear) + gameTime (current time in game) + nextRestStopTime (how long until next rest stop) + routeTime(how long current route is expected to take to complete) + jobExpirationTimeInDaysHoursMinutes (amount of time until job expires and delivery is late) + isWorldOfTrucksContract (whether the current job is a contract from the World of Trucks platform) + gameTimeLapsedToCompleteJob (when a job is completed, the amount of in-game time it took to complete) + realLifeTimeToCompleteWorldofTrucksJob (when a World of Trucks platform job is completed, how much real life time it took) + cargoMassInTons (if specifically asked, the mass of the cargo in tons) + cargoMass (the mass of the cargo) + routeDistance (distance remaining to complete the current route) + truckFuelRange (approximate distance that can be driven with remaining fuel) + fuelTankSize (total fuel capacity) + fuelRemaining (how much fuel is left in the tank) + fuelConsumption (the rate at which fuel is currently being used) + adblueTankSize (total capacity of adblue tank) + adblueRemaining (amount of adblue remaining) + plannedDistance (estimated distance to destination) + trailer (contains a large amount of information in a dictionary about the trailer being used) +custom_properties: + - hint: Default is false and will attempt to use US Customary Units, like foot, yard, mile, and pound. Set to true to attempt to use metric units, like meters, kilometers, and kilograms. 
+    id: use_metric_system
+    name: Use metric system
+    property_type: boolean
+    required: true
+    value: false
+  - hint: Path to the installation directory of American Truck Simulator. The skill will attempt to install the required game plugin for you.
+    id: ats_install_directory
+    name: American Truck Simulator Install Directory
+    property_type: string
+    required: false
+    value: C:\Program Files (x86)\Steam\steamapps\common\American Truck Simulator
+  - hint: Path to the installation directory of Euro Truck Simulator 2. The skill will attempt to install the required game plugin for you.
+    id: ets_install_directory
+    name: Euro Truck Simulator 2 Install Directory
+    property_type: string
+    required: false
+    value: C:\Program Files (x86)\Steam\steamapps\common\Euro Truck Simulator 2
+  - hint: The backstory used for the automatic dispatcher personality, if active. Changed data is placed directly after this backstory for the LLM to generate its response. If you want the dispatcher to speak in a different language, include that instruction here.
+    id: dispatcher_backstory
+    name: Dispatcher Backstory
+    property_type: textarea
+    required: true
+    value: |
+      You are a big rig truck dispatcher. Act in character at all times.
+      At your dispatch computer you have access to a data stream that shows you changes to key data for a truck you are responsible for dispatching. The available data fields are as follows:
+      onJob
+      plannedDistance (planned distance to the current destination)
+      jobFinished
+      jobCancelled
+      jobDelivered
+      jobStartingTime
+      jobFinishedTime
+      jobIncome
+      jobCancelledPenalty
+      jobDeliveredRevenue
+      jobDeliveredEarnedXp
+      jobDeliveredCargoDamage
+      jobDeliveredDistance
+      jobDeliveredAutoparkUsed
+      jobDeliveredAutoloadUsed
+      isCargoLoaded
+      specialJob
+      jobMarket (type of job market for job)
+      routeDistance (distance of current route)
+      fined
+      tollgate
+      ferry
+      train
+      refuel
+      refuelPayed
+      refuelAmount
+      cargoDamage
+      truckBrand
+      truckName
+      cargo
+      unitMass
+      cityDst
+      compDst
+      citySrc
+      compSrc
+      truckLicensePlate
+      truckLicensePlateCountry
+      fineOffence
+      fineAmount
+      isWorldOfTrucksContract (whether the current job is a contract from the World of Trucks platform)
+      gameTimeLapsedToCompleteJob (when a job is completed, the amount of in-game time it took to complete)
+      realLifeTimeToCompleteWorldofTrucksJob (when a World of Trucks platform job is completed, how much real life time it took)
+      cargoMassInTons (if specifically asked, the mass of the cargo in tons)
+      cargoMass (the mass of the cargo)
+
+      React to the data and inform the truck driver. Here are some examples of how you might react:
+      Example 1: The following key data changed: onJob: True, last value was onJob: False, cargo: tractor, last value was cargo: '', cargoMass: 10000 lb, last value was cargoMass: 0 lb, cityDst: Stockton, last value was cityDst: '', compDst: Walden, last value was compDst: ''.
+      You would say something like: Dispatch here. Got you a new job, you'll be hauling a tractor, weight is about ten thousand pounds, heading to Stockton to deliver to Walden's. Do you read me?
+      Example 2: The following key data changed: onJob: False, last value was onJob: True, jobCancelled: True, last value was jobCancelled: False, jobCancelledPenalty: 12000, last value was jobCancelledPenalty: 0.
+      You would say something like: This is dispatch. Really disappointed you cancelled that job. That will cost you 12,000 bucks.
+      Example 3: The following key data changed: fined: True, last value was fined: False, fineAmount: 500, last value was fineAmount: 0, fineOffence: speeding, last value was fineOffence: ''.
+      You would say something like: Driver, dispatch here contacting you. We were just notified by the authorities that you were fined $500 for speeding. Watch it, you could get fired or lose your license if you keep that reckless behavior up!
+      Other style hints: Note that for events like fines and cargo damage, just focus on the fine or cargo damage in your reaction; avoid commenting on other variables not related to the fines or cargo damage in that circumstance.
+      For cargo damage events, just summarize the damage level in plain language, like very small damage, light damage, medium damage, heavy damage, leaving the exact number out of your reaction.
+      Remember to use "ten four" not "10-4" when speaking to the driver.
+      If you see the driver has finished a job, only provide information about that job, not any previous ones.
+      Important style note -- remember you are speaking to the driver, so use plain declarative sentences, not emojis or lists, since those cannot be easily verbalized.
+      Using those examples, and keeping in role as a dispatcher, react to this data stream:
+  - hint: Whether to try to autostart dispatch mode, which automatically monitors for key game data changes. This can also be toggled with a voice command to start or end dispatch mode.
+    id: autostart_dispatch_mode
+    name: Autostart dispatch mode
+    property_type: boolean
+    required: true
+    value: false
diff --git a/templates/migration/1_5_0/skills/ats_telemetry/full_telemetry_example.txt b/templates/migration/1_5_0/skills/ats_telemetry/full_telemetry_example.txt
new file mode 100644
index 00000000..d48a4597
--- /dev/null
+++ b/templates/migration/1_5_0/skills/ats_telemetry/full_telemetry_example.txt
@@ -0,0 +1,432 @@
+Printout of all data received from telemetry: {
+'sdkActive': True,
+'paused': False,
+'time': 169193232,
+'simulatedTime': 207008386,
+'renderTime': 206980628,
+'multiplayerTimeOffset': 0,
+'telemetry_plugin_revision': 12,
+'version_major': 1,
+'version_minor': 5,
+'game': 2,
+'telemetry_version_game_major': 1,
+'telemetry_version_game_minor': 5,
+'time_abs': 9490,
+'gears': 10,
+'gears_reverse': 2,
+'retarderStepCount': 0,
+'truckWheelCount': 6,
+'selectorCount': 1,
+'time_abs_delivery': 9947,
+'maxTrailerCount': 10,
+'unitCount': 100,
+'plannedDistanceKm': 293,
+'shifterSlot': 0,
+'retarderBrake': 0,
+'lightsAuxFront': 0,
+'lightsAuxRoof': 0,
+'truck_wheelSubstance': [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+'hshifterPosition': [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+'hshifterBitmask': [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+'jobDeliveredDeliveryTime': 0,
+'jobStartingTime': 0,
+'jobFinishedTime': 0,
+'restStop': 826,
+'gear': 9,
+'gearDashboard': 9,
+'hshifterResulting': [0, 0, -1, -2, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+'jobDeliveredEarnedXp': 0,
+'scale': 20.0,
+'fuelCapacity': 568.0,
+'fuelWarningFactor': 0.15000000596046448,
+'adblueCapacity': 80.0,
+'adblueWarningFactor': 0.15000000596046448,
+'airPressureWarning': False,
+'airPressurEmergency': 29.579999923706055,
+'oilPressureWarning': False,
+'waterTemperatureWarning': False,
+'batteryVoltageWarning': False,
+'engineRpmMax': 2100.0,
+'gearDifferential': 2.8499999046325684,
+'cargoMass':
24856.900390625, +'truckWheelRadius': [0.5072842240333557, 0.5072842240333557, 0.5072842240333557, 0.5072842240333557, 0.5072842240333557, 0.5072842240333557, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'gearRatiosForward': [15.420000076293945, 11.520000457763672, 8.550000190734863, 6.28000020980835, 4.670000076293945, 3.299999952316284, 2.4600000381469727, 1.8300000429153442, 1.340000033378601, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'gearRatiosReverse': [-18.18000030517578, -3.890000104904175, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'unitMass': 248.56900024414062, +'speed': 20.752456665039062, +'engineRpm': 1499.0980224609375, +'userSteer': -0.007064927369356155, +'userThrottle': 0.8415774703025818, +'userBrake': 0.0, +'userClutch': 0.0, +'gameSteer': -0.007064927369356155, +'gameThrottle': 0.8415682315826416, +'gameBrake': 0.0, +'gameClutch': 0.0, +'cruiseControlSpeed': 0.0, +'airPressure': 123.23286437988281, +'brakeTemperature': 31.129560470581055, +'fuel': 297.7748107910156, +'fuelAvgConsumption': 0.48035165667533875, +'fuelRange': 604.8551025390625, +'adblue': 66.48873901367188, +'oilPressure': 69.16273498535156, +'oilTemperature': 80.07545471191406, +'waterTemperature': 58.727928161621094, +'batteryVoltage': 13.919618606567383, +'lightsDashboard': 1.0, +'wearEngine': 1.9995579350506887e-05, +'wearTransmission': 1.9995579350506887e-05, +'wearCabin': 1.5815534425200894e-05, +'wearChassis': 1.976941894099582e-05, +'wearWheels': 0.0002029682946158573, +'truckOdometer': 136116.5, +'routeDistance': 285730.9375, +'routeTime': 12663.2265625, +'speedLimit': 24.587200164794922, +'truck_wheelSuspDeflection': [0.0005713463178835809, -0.001106309937313199, 0.0008573083905503154, -0.0002959136909339577, 0.003856382332742214, 0.0030263804364949465, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truck_wheelVelocity': [6.519678592681885, 6.5074028968811035, 6.5195393562316895, 6.508543968200684, 6.51959753036499, 6.508599758148193, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truck_wheelSteering': [-0.0007823744672350585, -0.000787581317126751, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truck_wheelRotation': [0.3390186131000519, 0.7923967242240906, 0.6299140453338623, 0.8462331295013428, 0.5042362809181213, 0.7273635864257812, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truck_wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truck_wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'jobDeliveredCargoDamage': 0.0, +'jobDeliveredDistanceKm': 0.0, +'refuelAmount': 0.0, +'cargoDamage': 1.9082845028606243e-05, +'truckWheelSteerable': [True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False], +'truckWheelSimulated': [True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False], +'truckWheelPowered': [False, False, True, True, True, True, False, False, False, False, False, False, False, False, False, False], +'truckWheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], +'isCargoLoaded': True, 'specialJob': False, 'parkBrake': False, 'motorBrake': False, 'airPressureEmergency': False, +'fuelWarning': False, +'adblueWarning': False, +'electricEnabled': True, +'engineEnabled': True, +'wipers': False, +'blinkerLeftActive': False, +'blinkerRightActive': 
False, +'blinkerLeftOn': False, +'blinkerRightOn': False, +'lightsParking': False, +'lightsBeamLow': False, +'lightsBeamHigh': False, +'lightsBeacon': False, +'lightsBrake': False, +'lightsReverse': False, +'lightsHazards': False, +'cruiseControl': False, +'truckWheelOnGround': [True, True, True, True, True, True, False, False, False, False, False, False, False, False, False, False], +'shifterToggle': [False, False], +'differentialLock': False, +'liftAxle': False, +'liftAxleIndicator': False, +'trailerLiftAxle': False, +'trailerLiftAxleIndicator': False, +'jobDeliveredAutoparkUsed': False, +'jobDeliveredAutoloadUsed': False, +'cabinPositionX': 0.0, +'cabinPositionY': -2.0, +'headPositionX': -0.4050000011920929, +'headPositionY': -0.6500000953674316, +'headPositionZ': 0.5670000314712524, +'truckHookPositionX': 0.0, +'truckHookPositionY': 1.0, +'truckHookPositionZ': 1.5902955532073975, +'truckWheelPositionX': [-1.0399999618530273, 1.0399999618530273, -0.9319999814033508, 0.9319999814033508, -0.9319999814033508, 0.9319999814033508, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truckWheelPositionY': [0.5059999823570251, 0.5059999823570251, 0.5059999823570251, 0.5059999823570251, 0.5059999823570251, 0.5059999823570251, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'truckWheelPositionZ': [-3.648953914642334, -3.648953914642334, 1.1511303186416626, 1.1511303186416626, 2.4958086013793945, 2.4958086013793945, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], +'lv_accelerationX': 0.057172566652297974, +'lv_accelerationY': -0.014844902791082859, +'lv_accelerationZ': -20.76361656188965, +'av_accelerationX': 0.0021989073138684034, +'av_accelerationY': -0.0029942377004772425, +'av_accelerationZ': 3.1783220038050786e-05, +'accelerationX': 0.0031964685767889023, +'accelerationY': -0.14801974594593048, +'accelerationZ': -0.3246815800666809, +'aa_accelerationX': 0.002690908033400774, +'aa_accelerationY': -2.8279449907131493e-05, +'aa_accelerationZ': -0.0010006398661062121, +'cabinAVX': 0.0, +'cabinAVY': 0.0, +'cabinAVZ': 0.0, +'cabinAAX': 0.0, +'cabinAAY': 0.0, +'cabinAAZ': 0.0, +'cabinOffsetX': 0.0, +'cabinOffsetY': 0.0, +'cabinOffsetZ': 0.0, +'cabinOffsetrotationX': 0.0, +'cabinOffsetrotationY': 0.0, +'cabinOffsetrotationZ': 0.0, +'headOffsetX': 0.054268285632133484, +'headOffsetY': -0.07522289454936981, +'headOffsetZ': -0.15859121084213257, +'headOffsetrotationX': 0.9506424069404602, +'headOffsetrotationY': 0.02964305691421032, +'headOffsetrotationZ': -0.009435677900910378, +'coordinateX': -56598.13327026367, +'coordinateY': 71.66679382324219, +'coordinateZ': -12153.561309814453, +'rotationX': 0.6621344685554504, +'rotationY': -0.001264609512872994, +'rotationZ': -0.00013648993626702577, +'truckBrandId': 'peterbilt', +'truckBrand': 'Peterbilt', +'truckId': 'vehicle.peterbilt.389', +'truckName': '389', +'cargoId': 'pt_579', +'cargo': 'Peterbilt Trucks', +'cityDstId': 'montrose', +'cityDst': 'Montrose', +'compDstId': 'pt_trk_dlr', +'compDst': 'Peterbilt', +'citySrcId': 'vernal', +'citySrc': 'Vernal', +'compSrcId': 'pt_trk_dlr', +'compSrc': 'Peterbilt', +'shifterType': 'arcade', +'truckLicensePlate': 'Z023831', +'truckLicensePlateCountryId': 'utah', +'truckLicensePlateCountry': 'Utah', +'jobMarket': 'quick_job', +'fineOffence': '', +'ferrySourceName': '', +'ferryTargetName': '', +'ferrySourceId': '', +'ferryTargetId': '', +'trainSourceName': '', +'trainTargetName': '', +'trainSourceId': '', +'trainTargetId': '', +'jobIncome': 7613, +'jobCancelledPenalty': 0, +'jobDeliveredRevenue': 0, 
+'fineAmount': 0,
+'tollgatePayAmount': 0,
+'ferryPayAmount': 0,
+'trainPayAmount': 0,
+'onJob': True,
+'jobFinished': False,
+'jobCancelled': False,
+'jobDelivered': False,
+'fined': False,
+'tollgate': False,
+'ferry': False,
+'train': False,
+'refuel': False,
+'refuelPayed': False,
+'substances': ['static', 'road', 'road_snow', 'dirt', 'snow', 'grass', 'road_dirt', 'invis', 'ice', 'metal', 'rubber', 'rumble_stripe', 'plastic', 'glass', 'wood', 'soft', 'road_smooth', 'road_coarse', 'gravel', 'concrete', '', '', '', '', ''],
+'trailer': [{
+ 'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False],
+ 'wheelSimulated': [True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False],
+ 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False],
+ 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False],
+ 'wheelOnGround': [True, True, True, True, False, False, False, False, False, False, False, False, False, False, False, False],
+ 'attached': True, 'wheelSubstance': [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ 'wheelCount': 4,
+ 'cargoDamage': 5.7248536904808134e-05,
+ 'wearChassis': 8.700502075953409e-05,
+ 'wearWheels': 0.00018925870244856924,
+ 'wearBody': 6.960401515243575e-05,
+ 'wheelSuspDeflection': [0.001207323046401143, -0.003518986515700817, 0.001354379579424858, -0.0014462756225839257, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelVelocity': [6.358851432800293, 6.348288536071777, 6.358870506286621, 6.348255157470703, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelRotation': [0.24736735224723816, 0.5280472636222839, 0.1254892796278, 0.41627106070518494, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelRadius': [0.5199999809265137, 0.5199999809265137, 0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'linearVelocityX': 0.08644546568393707,
+ 'linearVelocityY': 0.027655865997076035,
+ 'linearVelocityZ': -20.75946044921875,
+ 'angularVelocityX': 0.0020713915582746267,
+ 'angularVelocityY': -0.0028369612991809845,
+ 'angularVelocityZ': -0.0009769933531060815,
+ 'linearAccelerationX': 0.003068926278501749,
+ 'linearAccelerationY': -0.29153236746788025,
+ 'linearAccelerationZ': -0.32241541147232056,
+ 'angularAccelerationX': 0.0033968703355640173,
+ 'angularAccelerationY': -0.00037714961217716336,
+ 'angularAccelerationZ': -0.0029267696663737297,
+ 'hookPositionX': 0.0,
+ 'hookPositionY': 1.0,
+ 'hookPositionZ': -4.928393840789795,
+ 'wheelPositionX': [-0.9680917263031006, 0.9680917263031006, -0.9728195667266846, 0.9728195667266846, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelPositionY': [0.5199999809265137, 0.5199999809265137, 0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+ 'wheelPositionZ': [4.919853687286377, 4.919853687286377, 6.113210201263428, 6.113210201263428, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
+
'worldX': -56603.71405029297, + 'worldY': 71.73175048828125, + 'worldZ': -12156.945404052734, + 'rotationX': 0.6635493636131287, + 'rotationY': -0.0024805348366498947, + 'rotationZ': 0.0002618239086586982, + 'id': 'truck_transporter.mid2_firstx2esii', + 'cargoAcessoryId': 'truck_transporter.ptb_579_white1', + 'bodyType': '_tranitept79', + 'brandId': '', + 'brand': '', + 'name': '', + 'chainType': 'single', + 'licensePlate': '542856B', + 'licensePlateCountry': 'Utah', + 'licensePlateCountryId': 'utah'}, + {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelSimulated': [True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelOnGround': [True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'attached': True, + 'wheelSubstance': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + 'wheelCount': 2, + 'cargoDamage': 0.0, + 'wearChassis': 0.00010775496775750071, + 'wearWheels': 0.00019242058624513447, + 'wearBody': 8.620397420600057e-05, + 'wheelSuspDeflection': [0.00036682604695670307, -0.0007475137826986611, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelVelocity': [6.355575084686279, 6.345222473144531, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRotation': [0.36591559648513794, 0.8230391144752502, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRadius': [0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'linearVelocityX': 0.054376956075429916, + 'linearVelocityY': 0.10145216435194016, + 'linearVelocityZ': -20.747961044311523, + 'angularVelocityX': -0.001152990385890007, + 'angularVelocityY': -0.0027617125306278467, + 'angularVelocityZ': 0.00022915970475878567, + 'linearAccelerationX': 0.03915806859731674, + 'linearAccelerationY': 0.11982301622629166, + 'linearAccelerationZ': -0.28069794178009033, + 'angularAccelerationX': -0.009115384891629219, + 'angularAccelerationY': -0.0008358439081348479, + 'angularAccelerationZ': 0.00508680148050189, + 'hookPositionX': 0.0, + 'hookPositionY': 1.0, + 'hookPositionZ': -1.9945406913757324, + 'wheelPositionX': [-0.9746841788291931, 0.9746841788291931, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionY': [0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionZ': [4.300641059875488, 4.300641059875488, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'worldX': -56609.92657470703, + 'worldY': 71.82679748535156, + 'worldZ': -12160.68539428711, + 'rotationX': 0.6643381714820862, + 'rotationY': -0.0033320696093142033, + 'rotationZ': 0.0003402951988391578, + 'id': 'truck_transporter.mid_secondx2esii', + 
'cargoAcessoryId': 'truck_transporter.ptb_579_white2', + 'bodyType': '', + 'brandId': '', + 'brand': '', + 'name': '', + 'chainType': '', + 'licensePlate': '601801S', + 'licensePlateCountry': 'Utah', + 'licensePlateCountryId': 'utah'}, + {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelSimulated': [True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelOnGround': [True, True, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'attached': True, + 'wheelSubstance': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + 'wheelCount': 2, + 'cargoDamage': 0.0, + 'wearChassis': 0.00018087547505274415, + 'wearWheels': 0.00020356278400868177, + 'wearBody': 0.00014470038877334446, + 'wheelSuspDeflection': [0.0017827606061473489, 0.0009589004330337048, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelVelocity': [6.357532024383545, 6.348286151885986, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRotation': [0.8401334881782532, 0.16890375316143036, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRadius': [0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'linearVelocityX': 0.04934953898191452, + 'linearVelocityY': 0.07690752297639847, + 'linearVelocityZ': -20.75708770751953, + 'angularVelocityX': 0.0013052199501544237, + 'angularVelocityY': -0.0024662413634359837, + 'angularVelocityZ': 0.0001222444698214531, + 'linearAccelerationX': -0.026931315660476685, + 'linearAccelerationY': -0.13051074743270874, + 'linearAccelerationZ': -0.3180672526359558, + 'angularAccelerationX': 0.0027606417424976826, + 'angularAccelerationY': 0.0013949627755209804, + 'angularAccelerationZ': -0.013173911720514297, + 'hookPositionX': 0.0, + 'hookPositionY': 1.0, + 'hookPositionZ': -2.8044443130493164, + 'wheelPositionX': [-0.9746841788291931, 0.9746841788291931, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionY': [0.5199999809265137, 0.5199999809265137, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionZ': [3.570002555847168, 3.570002555847168, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'worldX': -56615.53796386719, + 'worldY': 71.93563842773438, + 'worldZ': -12164.023223876953, + 'rotationX': 0.6650192141532898, + 'rotationY': -0.0031634399201720953, + 'rotationZ': 0.0002087761095026508, + 'id': 'truck_transporter.mid_thirdx2esii', + 'cargoAcessoryId': 'truck_transporter.ptb_579_white3', + 'bodyType': '', + 'brandId': '', + 'brand': '', + 'name': '', + 'chainType': '', + 'licensePlate': '748736N', + 'licensePlateCountry': 'Utah', + 'licensePlateCountryId': 'utah'}, + {'wheelSteerable': [False, False, False, 
False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], + 'attached': False, + 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], + 'wheelCount': 0, + 'cargoDamage': 0.0, + 'wearChassis': 0.0, + 'wearWheels': 0.0, + 'wearBody': 0.0, + 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'linearVelocityX': 0.0, + 'linearVelocityY': 0.0, + 'linearVelocityZ': 0.0, + 'angularVelocityX': 0.0, + 'angularVelocityY': 0.0, + 'angularVelocityZ': 0.0, + 'linearAccelerationX': 0.0, + 'linearAccelerationY': 0.0, + 'linearAccelerationZ': 0.0, + 'angularAccelerationX': 0.0, + 'angularAccelerationY': 0.0, + 'angularAccelerationZ': 0.0, + 'hookPositionX': 0.0, + 'hookPositionY': 0.0, + 'hookPositionZ': 0.0, + 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], + 'worldX': 0.0, + 'worldY': 0.0, + 'worldZ': 0.0, + 'rotationX': 0.0, + 'rotationY': 0.0, + 'rotationZ': 0.0, + 'id': '', + 'cargoAcessoryId': '', + 'bodyType': '', + 'brandId': '', + 'brand': '', + 'name': '', + 'chainType': '', + 'licensePlate': '', + 'licensePlateCountry': '', + 'licensePlateCountryId': ''}, + {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}, {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}, {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}, {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, 
False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}, {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 
'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}, {'wheelSteerable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelSimulated': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelPowered': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelLiftable': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'wheelOnGround': [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False], 'attached': False, 'wheelSubstance': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'wheelCount': 0, 'cargoDamage': 0.0, 'wearChassis': 0.0, 'wearWheels': 0.0, 'wearBody': 0.0, 'wheelSuspDeflection': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelVelocity': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelSteering': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRotation': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLift': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelLiftOffset': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelRadius': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'linearVelocityX': 0.0, 'linearVelocityY': 0.0, 'linearVelocityZ': 0.0, 'angularVelocityX': 0.0, 'angularVelocityY': 0.0, 'angularVelocityZ': 0.0, 'linearAccelerationX': 0.0, 'linearAccelerationY': 0.0, 'linearAccelerationZ': 0.0, 'angularAccelerationX': 0.0, 'angularAccelerationY': 0.0, 'angularAccelerationZ': 0.0, 'hookPositionX': 0.0, 'hookPositionY': 0.0, 'hookPositionZ': 0.0, 'wheelPositionX': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionY': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'wheelPositionZ': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], 'worldX': 0.0, 'worldY': 0.0, 'worldZ': 0.0, 'rotationX': 0.0, 'rotationY': 0.0, 'rotationZ': 0.0, 'id': '', 'cargoAcessoryId': '', 'bodyType': '', 'brandId': '', 'brand': '', 'name': '', 'chainType': '', 'licensePlate': '', 'licensePlateCountry': '', 'licensePlateCountryId': ''}]} \ No newline at end of file diff --git a/templates/migration/1_5_0/skills/ats_telemetry/logo.png b/templates/migration/1_5_0/skills/ats_telemetry/logo.png new file mode 100644 index 00000000..43fec71b Binary 
files /dev/null and b/templates/migration/1_5_0/skills/ats_telemetry/logo.png differ diff --git a/templates/migration/1_5_0/skills/ats_telemetry/main.py b/templates/migration/1_5_0/skills/ats_telemetry/main.py new file mode 100644 index 00000000..950e523c --- /dev/null +++ b/templates/migration/1_5_0/skills/ats_telemetry/main.py @@ -0,0 +1,860 @@ +import os +import shutil +import math +import copy +from datetime import datetime, timedelta +import requests +import truck_telemetry +from pyproj import Proj, transform +import time +from typing import TYPE_CHECKING +from api.interface import ( + SettingsConfig, + SkillConfig, + WingmanInitializationError, +) +from api.enums import LogType +from skills.skill_base import Skill +from services.file import get_writable_dir + + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + +class ATSTelemetry(Skill): + + def __init__(self, config: SkillConfig, settings: SettingsConfig, wingman: "OpenAiWingman") -> None: + self.already_initialized_telemetry = False + self.use_metric_system = False + self.telemetry_loop_cached_data = {} + self.telemetry_loop_data_points = [ + 'onJob', + 'plannedDistance', + 'jobFinished', + 'jobCancelled', + 'jobDelivered', + 'jobStartingTime', + 'jobFinishedTime', + 'jobIncome', + 'jobCancelledPenalty', + 'jobDeliveredRevenue', + 'jobDeliveredEarnedXp', + 'jobDeliveredCargoDamage', + 'jobDeliveredDistance', + 'jobDeliveredAutoparkUsed', + 'jobDeliveredAutoloadUsed', + 'isCargoLoaded', + 'specialJob', + 'jobMarket', + 'fined', + 'tollgate', + 'ferry', + 'train', + 'refuel', + 'refuelPayed', + 'refuelAmount', + 'cargoDamage', + 'truckBrand', + 'truckName', + 'cargo', + 'cityDst', + 'compDst', + 'citySrc', + 'compSrc', + 'truckLicensePlate', + 'truckLicensePlateCountry', + 'fineOffence', + 'fineAmount', + 'isWorldOfTrucksContract', + 'gameTimeLapsedToCompleteJob', + 'realLifeTimeToCompleteWorldofTrucksJob', + 'cargoMassInTons', + 'cargoMass', + ] + self.telemetry_loop_running = False + self.ats_install_directory = "" + self.ets_install_directory = "" + self.dispatcher_backstory = "" + self.autostart_dispatch_mode = False + # Define the ATS (American Truck Simulator) projection for use in converting in-game coordinates to real life + self.ats_proj = Proj( + proj='lcc', lat_1=33, lat_2=45, lat_0=39, lon_0=-96, units='m', k_0=0.05088, ellps='sphere' + ) + # Define the ETS2 (Euro Truck Simulator 2) projection and the UK projection for use in converting in-game coordinates to real life + ets2_scale = 1 / 19.35 + uk_scale = ets2_scale / 0.75 + + self.ets2_proj = Proj( + proj='lcc', lat_1=37, lat_2=65, lat_0=50, lon_0=15, units='m', k_0=ets2_scale, ellps='sphere' + ) + self.uk_proj = Proj( + proj='lcc', lat_1=37, lat_2=65, lat_0=50, lon_0=15, units='m', k_0=uk_scale, ellps='sphere' + ) + super().__init__(config=config, settings=settings, wingman=wingman) + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + self.use_metric_system = self.retrieve_custom_property_value( + "use_metric_system", errors + ) + self.ats_install_directory = self.retrieve_custom_property_value( + "ats_install_directory", errors + ) + self.ets_install_directory = self.retrieve_custom_property_value( + "ets_install_directory", errors + ) + self.dispatcher_backstory = self.retrieve_custom_property_value( + "dispatcher_backstory", errors + ) + # Default back to wingman backstory + if self.dispatcher_backstory == "" or self.dispatcher_backstory == " " or not self.dispatcher_backstory: + 
self.dispatcher_backstory = self.wingman.config.prompts.backstory + + self.autostart_dispatch_mode = self.retrieve_custom_property_value( + "autostart_dispatch_mode", errors + ) + await self.check_and_install_telemetry_dlls() + return errors + + # Try to find existing telemetry dlls, if not found, attempt to install + async def check_and_install_telemetry_dlls(self): + skills_filepath = get_writable_dir("skills") + ats_telemetry_skill_filepath = os.path.join(skills_filepath, "ats_telemetry") + sdk_dll_filepath = os.path.join(ats_telemetry_skill_filepath, "scs-telemetry.dll") + ats_plugins_dir = os.path.join(self.ats_install_directory, "bin\\win_x64\\plugins") + ets_plugins_dir = os.path.join(self.ets_install_directory, "bin\\win_x64\\plugins") + # Check and copy dll for ATS if applicable + if os.path.exists(self.ats_install_directory): + ats_dll_path = os.path.join(ats_plugins_dir, "scs-telemetry.dll") + if not os.path.exists(ats_dll_path): + try: + if not os.path.exists(ats_plugins_dir): + os.makedirs(ats_plugins_dir) + shutil.copy2(sdk_dll_filepath, ats_plugins_dir) + except Exception as e: + if self.settings.debug_mode: + await self.printr.print_async( + f"Could not install scs telemetry dll to {ats_plugins_dir}.", + color=LogType.INFO, + ) + # Check and copy dll for ETS if applicable + if os.path.exists(self.ets_install_directory): + ets_dll_path = os.path.join(ets_plugins_dir, "scs-telemetry.dll") + if not os.path.exists(ets_dll_path): + try: + if not os.path.exists(ets_plugins_dir): + os.makedirs(ets_plugins_dir) + shutil.copy2(sdk_dll_filepath, ets_plugins_dir) + except Exception as e: + if self.settings.debug_mode: + await self.printr.print_async( + f"Could not install scs telemetry dll to {ets_plugins_dir}.", + color=LogType.INFO, + ) + + # Start telemetry module connection with in-game telemetry SDK + async def initialize_telemetry(self) -> bool: + if self.settings.debug_mode: + await self.printr.print_async( + "Starting ATS / ETS telemetry module", + color=LogType.INFO, + ) + # truck_telemetry.init() requires the user to have installed the proper SDK DLL from https://github.com/RenCloud/scs-sdk-plugin/releases/tag/V.1.12.1 + # into the proper folder of their truck sim install (https://github.com/RenCloud/scs-sdk-plugin#installation), if they do not this step will fail, so need to catch the error. 
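+        # A failed init() here is not fatal: execute_tool() retries by calling
+        # truck_telemetry.get_data() directly as a fail safe before surfacing an
+        # error to the user.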
+ try: + truck_telemetry.init() + return True + except: + if self.settings.debug_mode: + await self.printr.print_async( + f"Initialize ATSTelemetry function failed.", + color=LogType.INFO, + ) + return False + + # Initiate separate thread for constant checking of changes to key telemetry data points + async def initialize_telemetry_cache_loop(self, loop_time : int = 10): + if self.telemetry_loop_running: + return + + if self.settings.debug_mode: + await self.printr.print_async( + "Starting ATS / ETS telemetry cache loop", + color=LogType.INFO, + ) + self.threaded_execution(self.start_telemetry_loop, loop_time) + + # Loop every designated number of seconds to retrieve telemetry data and run query function to determine if any tracked data points have changed + async def start_telemetry_loop(self, loop_time: int): + if not self.telemetry_loop_running: + self.telemetry_loop_running = True + data = truck_telemetry.get_data() + filtered_data = await self.filter_data(data) + self.telemetry_loop_cached_data = copy.deepcopy(filtered_data) + while self.telemetry_loop_running: + changed_data = await self.query_and_compare_data(self.telemetry_loop_data_points) + if changed_data: + await self.initiate_llm_call_with_changed_data(changed_data) + time.sleep(loop_time) + + # Compare new telemetry data in monitored fields and react if there are changes + async def query_and_compare_data(self, data_points: list): + try: + default = "The following data changed: " + data_changed = default + data = truck_telemetry.get_data() + filtered_data = await self.filter_data(data) + if self.settings.debug_mode: + await self.printr.print_async( + "Querying and comparing telemetry data", + color=LogType.INFO, + ) + for point in data_points: + current_data = filtered_data.get(point) + if current_data and current_data != self.telemetry_loop_cached_data.get(point): + # Prevent relatively small data changes in float values from triggering new alerts, such as when cargo damage, route distance or route time change by very small values + if isinstance(current_data, float) and abs((current_data - self.telemetry_loop_cached_data.get(point)) / current_data) <= 0.25: + pass + else: + data_changed = data_changed + f"{point}:{current_data}, last value was {point}:{self.telemetry_loop_cached_data.get(point)}," + self.telemetry_loop_cached_data = copy.deepcopy(filtered_data) + if data_changed == default: + if self.settings.debug_mode: + await self.printr.print_async( + "No changed telemetry data found.", + color=LogType.INFO, + ) + return None + else: + if self.settings.debug_mode: + await self.printr.print_async( + data_changed, + color=LogType.INFO, + ) + return data_changed + + except Exception as e: + return None + + # Stop ongoing call for updated telemetry data + async def stop_telemetry_loop(self): + self.telemetry_loop_running = False + self.telemetry_loop_cached_data = {} + if self.settings.debug_mode: + await self.printr.print_async( + "Stopping ATS / ETS telemetry cache loop", + color=LogType.INFO, + ) + + # If telemetry data changed, get LLM to provide a verbal response to the user, without requiring the user to initiate a communication with the LLM + async def initiate_llm_call_with_changed_data(self, changed_data): + units_phrase = "You are located in the US, so use US Customary Units, like feet, yards, miles, and pounds in your responses, and convert metric or imperial formats to these units. All currency is in dollars." 
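+        # The 'use_metric_system' custom property only flips this unit instruction;
+        # the rest of the system prompt assembled below stays the same.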
+        if self.use_metric_system:
+            units_phrase = "Use the metric system like meters, kilometers, kilometers per hour, and kilograms in your responses."
+        user_content = f"{changed_data}"
+        messages = [
+            {
+                'role': 'system',
+                'content': f"""
+                    {self.dispatcher_backstory}
+                    Acting in character at all times, react to the following changed information.
+                    {units_phrase}
+                """,
+            },
+            {
+                'role': 'user',
+                'content': user_content,
+            },
+        ]
+        completion = await self.llm_call(messages)
+        response = completion.choices[0].message.content if completion and completion.choices else ""
+
+        if not response:
+            return
+
+        await self.printr.print_async(
+            text=f"Dispatch: {response}",
+            color=LogType.INFO,
+            source_name=self.wingman.name
+        )
+
+        self.threaded_execution(self.wingman.play_to_user, response, True)
+        await self.wingman.add_assistant_message(response)
+
+    def get_tools(self) -> list[tuple[str, dict]]:
+        tools = [
+            (
+                "get_game_state",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "get_game_state",
+                        "description": "Retrieve the current game state variable from American Truck Simulator or Euro Truck Simulator 2.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "variable": {
+                                    "type": "string",
+                                    "description": "The game state variable to retrieve (e.g., 'speed').",
+                                }
+                            },
+                            "required": ["variable"],
+                        },
+                    },
+                },
+            ),
+            (
+                "get_information_about_current_location",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "get_information_about_current_location",
+                        "description": "Used to provide more detailed information if the user asks a general question like 'where are we?' or 'what city are we in?'",
+                    },
+                },
+            ),
+            (
+                "start_or_activate_dispatch_telemetry_loop",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "start_or_activate_dispatch_telemetry_loop",
+                        "description": "Begin dispatch function, which will check telemetry at designated intervals.",
+                    },
+                },
+            ),
+            (
+                "end_or_stop_dispatch_telemetry_loop",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "end_or_stop_dispatch_telemetry_loop",
+                        "description": "End or stop dispatch function, to stop automatically checking telemetry at designated intervals.",
+                    },
+                },
+            ),
+        ]
+        return tools
+
+    async def execute_tool(self, tool_name: str, parameters: dict[str, any]) -> tuple[str, str]:
+        function_response = ""
+        instant_response = ""
+
+        if tool_name == "get_game_state":
+            if not self.already_initialized_telemetry:
+                self.already_initialized_telemetry = await self.initialize_telemetry()
+                # If initialization of the telemetry object fails because another instance is already running, try just getting the data as a fail safe; if we still cannot get the data, trigger the error.
+                if not self.already_initialized_telemetry:
+                    try:
+                        test = truck_telemetry.get_data()
+                        self.already_initialized_telemetry = True
+                    except:
+                        function_response = "Error trying to access truck telemetry data. It appears there is a problem with the module. Check to see if the game is running, and that you have installed the SDK in the proper location."
+                    return function_response, instant_response
+
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+                await self.printr.print_async(
+                    f"Executing get_game_state function with parameters: {parameters}",
+                    color=LogType.INFO,
+                )
+
+            data = truck_telemetry.get_data()
+            filtered_data = await self.filter_data(data)
+
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Printout of all data received from telemetry: {filtered_data}",
+                    color=LogType.INFO,
+                )
+
+            variable = parameters.get("variable")
+            if variable in filtered_data:
+                value = filtered_data[variable]
+                try:
+                    string_value = str(value)
+                except Exception:
+                    string_value = "value could not be found."
+                function_response = f"The current value of '{variable}' is {string_value}."
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Found variable result in telemetry for {variable}, {string_value}",
+                        color=LogType.INFO,
+                    )
+            else:
+                function_response = f"Variable '{variable}' not found."
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Could not locate variable result in telemetry for {variable}.",
+                        color=LogType.INFO,
+                    )
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        elif tool_name == "start_or_activate_dispatch_telemetry_loop":
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+                await self.printr.print_async(
+                    "Executing start_or_activate_dispatch_telemetry_loop",
+                    color=LogType.INFO,
+                )
+
+            if self.telemetry_loop_running:
+                function_response = "Dispatch communications already open."
+                if self.settings.debug_mode:
+                    await self.print_execution_time()
+                    await self.printr.print_async(
+                        "Attempted to start dispatch communications loop but loop is already running",
+                        color=LogType.INFO,
+                    )
+
+                return function_response, instant_response
+
+            if not self.already_initialized_telemetry:
+                self.already_initialized_telemetry = await self.initialize_telemetry()
+            # If initializing the telemetry object fails because another instance is already running, try just reading the data as a fail-safe; if that also fails, report the error
+            if not self.already_initialized_telemetry:
+                try:
+                    # Probe read: if this succeeds, telemetry is already available
+                    truck_telemetry.get_data()
+                    self.already_initialized_telemetry = True
+                except Exception:
+                    function_response = "Error trying to access truck telemetry data. It appears there is a problem with the module. Check to see if the game is running, and that you have installed the SDK in the proper location."
+                    return function_response, instant_response
+
+            if not self.telemetry_loop_running:
+                await self.initialize_telemetry_cache_loop(10)
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            function_response = "Opened dispatch communications."
+
+        elif tool_name == "end_or_stop_dispatch_telemetry_loop":
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+                await self.printr.print_async(
+                    "Executing end_or_stop_dispatch_telemetry_loop",
+                    color=LogType.INFO,
+                )
+
+            await self.stop_telemetry_loop()
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            function_response = "Closed dispatch communications."
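+        # The branch below reverse-geocodes the truck's in-game position into a real-world place description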
+
+        elif tool_name == "get_information_about_current_location":
+            if not self.already_initialized_telemetry:
+                self.already_initialized_telemetry = await self.initialize_telemetry()
+            # If initializing the telemetry object fails because another instance is already running, try just reading the data as a fail-safe; if that also fails, report the error
+            if not self.already_initialized_telemetry:
+                try:
+                    # Probe read: if this succeeds, telemetry is already available
+                    truck_telemetry.get_data()
+                    self.already_initialized_telemetry = True
+                except Exception:
+                    function_response = "Error trying to access truck telemetry data. It appears there is a problem with the module. Check to see if the game is running, and that you have installed the SDK in the proper location."
+                    return function_response, instant_response
+
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+
+            data = truck_telemetry.get_data()
+            # Pull coordinates from the truck sim; of the game's x, y, z axes, z maps to the 2D y used here and y is altitude
+            x = data["coordinateX"]
+            y = data["coordinateZ"]
+
+            # Convert to world latitude and longitude
+            longitude, latitude = await self.from_ats_coords_to_wgs84(x, y)
+
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f"Executing get_information_about_current_location function with coordinateX as {x} and coordinateZ as {y}, latitude returned was {latitude}, longitude returned was {longitude}.",
+                    color=LogType.INFO,
+                )
+
+            place_info = await self.convert_lat_long_data_into_place_data(latitude, longitude)
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            if place_info:
+                function_response = f"Information regarding the approximate location we are near: {place_info}"
+            else:
+                function_response = "Unable to get more detailed information regarding the place based on the current truck coordinates."
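+            # Either way, the tool response above is returned to the LLM, which phrases it for the user in character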
+
+        return function_response, instant_response
+
+    # Function to autostart dispatch mode
+    async def autostart_dispatcher_mode(self):
+        telemetry_started = False
+        while not telemetry_started and self.loaded:
+            telemetry_started = await self.initialize_telemetry()
+            # Init can fail if the truck telemetry module was already started elsewhere, so also try querying the data directly before concluding it is not available
+            if not telemetry_started:
+                try:
+                    truck_telemetry.get_data()
+                    telemetry_started = True
+                except Exception:
+                    telemetry_started = False
+            # Try again in ten seconds; maybe the user has not loaded up Truck Simulator yet
+            time.sleep(10)
+        if self.loaded:
+            await self.initialize_telemetry_cache_loop(10)
+
+    # Autostart dispatch mode if the option is turned on in the config
+    async def prepare(self) -> None:
+        self.loaded = True
+        if self.autostart_dispatch_mode:
+            self.threaded_execution(self.autostart_dispatcher_mode)
+
+    # Unload telemetry module and stop any ongoing loop when the config / program unloads
+    async def unload(self) -> None:
+        self.loaded = False
+        await self.stop_telemetry_loop()
+        truck_telemetry.deinit()
+
+    # Helper data functions for enhancing telemetry data before sending it to the LLM
+    # Adapted and revised from https://github.com/mike-koch/ets2-mobile-route-advisor/blob/master/dashboard.js
+    async def filter_data(self, data):
+        try:
+            # Set enhanced data to a copy of all values of data
+            enhanced_data = copy.deepcopy(data)
+
+            enhanced_data['isEts2'] = (data['game'] == 1)
+            enhanced_data['isAts'] = not enhanced_data['isEts2']
+
+            # Deal with speed variables, create new ones specific to MPH and KPH
+            truck_speed_mph = data['speed'] * 2.23694
+            truck_speed_kph = data['speed'] * 3.6
+
+            if self.use_metric_system:
+                enhanced_data['truckSpeed'] = str(abs(round(truck_speed_kph))) + ' kilometers per hour'
+                enhanced_data['cruiseControlSpeed'] = str(round(data['cruiseControlSpeed'] * 3.6)) + ' kilometers per hour'
+                enhanced_data['speedLimit'] = str(round(data['speedLimit'] * 3.6)) + ' kilometers per hour'
+            else:
+                enhanced_data['truckSpeed'] = str(abs(round(truck_speed_mph))) + ' miles per hour'
+                enhanced_data['cruiseControlSpeed'] = str(round(data['cruiseControlSpeed'] * 2.23694)) + ' miles per hour'
+                enhanced_data['speedLimit'] = str(round(data['speedLimit'] * 2.23694)) + ' miles per hour'
+
+            # Deal with shifter type, use a more readable format
+            if data['shifterType'] in ['automatic', 'arcade']:
+                enhanced_data['gear'] = 'A' + str(data['gearDashboard']) if data['gearDashboard'] > 0 else ('R' + str(abs(data['gearDashboard'])) if data['gearDashboard'] < 0 else 'N')
+            else:
+                enhanced_data['gear'] = str(data['gearDashboard']) if data['gearDashboard'] > 0 else ('R' + str(abs(data['gearDashboard'])) if data['gearDashboard'] < 0 else 'N')
+
+            # Convert percentages
+            enhanced_data['currentFuelPercentage'] = str(round((data['fuel'] / data['fuelCapacity']) * 100)) + ' percent of fuel remaining.'
+            enhanced_data['currentAdbluePercentage'] = str(round((data['adblue'] / data['adblueCapacity']) * 100)) + ' percent of adblue remaining.'
+            
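# Damage figures below report the worst-worn component (see getDamagePercentage), not an average
+            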
+ scs_truck_damage = await self.getDamagePercentage(data) + enhanced_data['truckDamageRounded'] = str(math.floor(scs_truck_damage)) + ' percent truck damage' + scs_trailer_one_damage = await self.getDamagePercentageTrailer(data) + enhanced_data['wearTrailerRounded'] = str(math.floor(scs_trailer_one_damage)) + ' percent trailer damage' + + # Convert times + days_hours_and_minutes = await self.convert_minutes_to_days_hours_minutes(data['time_abs']) + if self.use_metric_system: + enhanced_data['gameTime'] = await self.convert_to_clock_time(days_hours_and_minutes, 24) + else: + enhanced_data['gameTime'] = await self.convert_to_clock_time(days_hours_and_minutes, 12) + + job_start_days_hours_and_minutes = await self.convert_minutes_to_days_hours_minutes(data['jobStartingTime']) + if self.use_metric_system: + enhanced_data['jobStartingTime'] = await self.convert_to_clock_time(job_start_days_hours_and_minutes, 24) + else: + enhanced_data['jobStartingTime'] = await self.convert_to_clock_time(job_start_days_hours_and_minutes, 12) + + job_finish_days_hours_and_minutes = await self.convert_minutes_to_days_hours_minutes(data['jobFinishedTime']) + if self.use_metric_system: + enhanced_data['jobFinishedTime'] = await self.convert_to_clock_time(job_finish_days_hours_and_minutes, 24) + else: + enhanced_data['jobFinishedTime'] = await self.convert_to_clock_time(job_finish_days_hours_and_minutes, 12) + + next_rest_stop_time_array = await self.convert_minutes_to_days_hours_minutes(data['restStop']) + enhanced_data['nextRestStopTime'] = await self.processTimeDifferenceArray(next_rest_stop_time_array) + + route_time_in_days_hours_minutes = await self.convert_seconds_to_days_hours_minutes(data['routeTime']) + enhanced_data['routeTime'] = await self.processTimeDifferenceArray(route_time_in_days_hours_minutes) + route_expiration = await self.convert_minutes_to_days_hours_minutes(data['time_abs_delivery'] - data['time_abs']) + enhanced_data['jobExpirationTimeInDaysHoursMinutes'] = await self.processTimeDifferenceArray(route_expiration) + enhanced_data['isWorldOfTrucksContract'] = await self.isWorldOfTrucksContract(data) + + if enhanced_data['isWorldOfTrucksContract']: + job_ended_time = await self.getDaysHoursMinutesAndSeconds(data['jobFinishedTime']) + job_started_time = await self.getDaysHoursMinutesAndSeconds(data['jobStartingTime']) + time_to_complete_route_array = await self.convert_minutes_to_days_hours_minutes(data['jobFinishedTime'] - data['jobStartingTime']) + real_life_time_to_complete_route = await self.convert_minutes_to_days_hours_minutes(data['jobDeliveredDeliveryTime']) + enhanced_data['realLifeTimeToCompleteWorldofTrucksJob'] = await self.processTimeDifferenceArray(real_life_time_to_complete_route) + else: + time_to_complete_route_array = await self.convert_minutes_to_days_hours_minutes(data['jobDeliveredDeliveryTime']) + enhanced_data['gameTimeLapsedToCompleteJob'] = await self.processTimeDifferenceArray(time_to_complete_route_array) + + # Convert weights + tons = (data['cargoMass'] / 1000.0) + enhanced_data['cargoMassInTons'] = str(tons) + ' t' if data['trailer'][0]['attached'] else '' + if self.use_metric_system: + enhanced_data['cargoMass'] = str(round(data['cargoMass'])) + ' kg' if data['trailer'][0]['attached'] else '' + else: + enhanced_data['cargoMass'] = str(round(data['cargoMass'] * 2.20462)) + ' lb' if data['trailer'][0]['attached'] else '' + + # Convert distances + route_distance_km = data['routeDistance'] / 1000 + route_distance_miles = route_distance_km * 0.621371 + + if 
self.use_metric_system: + enhanced_data['routeDistance'] = str(math.floor(route_distance_km)) + ' kilometers' + enhanced_data['truckOdometer'] = str(round(data['truckOdometer'])) + ' kilometers' + enhanced_data['truckFuelRange'] = str(round(data['fuelRange'])) + ' kilometers' + enhanced_data['plannedDistance'] = str(round(data['plannedDistanceKm'])) + ' kilometers' + enhanced_data['jobDeliveredDistance'] = str(round(data['jobDeliveredDistanceKm'])) + ' kilometers' + else: + enhanced_data['routeDistance'] = str(math.floor(route_distance_miles)) + ' miles' + enhanced_data['truckOdometer'] = str(round(data['truckOdometer'] * 0.621371)) + ' miles' + enhanced_data['truckFuelRange'] = str(round(data['fuelRange'] * 0.621371)) + ' miles' + enhanced_data['plannedDistance'] = str(round(data['plannedDistanceKm'] * 0.621371)) + ' miles' + enhanced_data['jobDeliveredDistance'] = str(round(data['jobDeliveredDistanceKm'] * 0.621371)) + ' miles' + + # Add currency symbol to income, fines, payments + enhanced_data['jobIncome'] = await self.getCurrency(data['jobIncome']) + enhanced_data['fineAmount'] = await self.getCurrency(data['fineAmount']) + enhanced_data['tollgatePayAmount'] = await self.getCurrency(data['tollgatePayAmount']) + enhanced_data['ferryPayAmount'] = await self.getCurrency(data['ferryPayAmount']) + enhanced_data['trainPayAmount'] = await self.getCurrency(data['trainPayAmount']) + enhanced_data['jobDeliveredRevenue'] = await self.getCurrency(data['jobDeliveredRevenue']) + enhanced_data['jobCancelledPenalty'] = await self.getCurrency(data['jobCancelledPenalty']) + + + # Convert temperatures + if self.use_metric_system: + enhanced_data['brakeTemperature'] = str(round(data['brakeTemperature'])) + ' degrees Celsius' + enhanced_data['oilTemperature'] = str(round(data['oilTemperature'])) + ' degrees Celsius' + enhanced_data['waterTemperature'] = str(round(data['waterTemperature'])) + ' degrees Celsius' + else: + enhanced_data['brakeTemperature'] = str(round(data['brakeTemperature'] * 1.8 + 32)) + ' degrees Fahrenheit' + enhanced_data['oilTemperature'] = str(round(data['oilTemperature'] * 1.8 + 32)) + ' degrees Fahrenheit' + enhanced_data['waterTemperature'] = str(round(data['waterTemperature'] * 1.8 + 32)) + ' degrees Fahrenheit' + + # Convert volumes + if self.use_metric_system: + enhanced_data['fuelTankSize'] = 'Fuel tank can hold ' + str(round(data['fuelCapacity'])) + ' liters' + enhanced_data['fuelRemaining'] = str(round(data['fuel'])) + ' liters of fuel remaining' + enhanced_data['fuelConsumption'] = str(data['fuelAvgConsumption']) + ' liters per kilometer' + enhanced_data['adblueTankSize'] = 'Adblue tank can hold ' + str(round(data['adblueCapacity'])) + ' liters' + enhanced_data['adblueRemaining'] = str(round(data['adblue'])) + ' liters of adblue remaining' + enhanced_data['refuelAmount'] = str(round(data['refuelAmount'])) + ' liters' + else: + enhanced_data['fuelTankSize'] = 'Fuel tank can hold ' + str(round(data['fuelCapacity'] * 0.26417205)) + ' gallons' + enhanced_data['fuelRemaining'] = str(round(data['fuel'] * 0.26417205)) + ' gallons of fuel remaining' + enhanced_data['fuelConsumption'] = str(round(data['fuelAvgConsumption'] * 2.35214583)) + ' miles per gallon' + enhanced_data['adblueTankSize'] = 'Adblue tank can hold ' + str(round(data['adblueCapacity'] * 0.26417205)) + ' gallons' + enhanced_data['adblueRemaining'] = str(round(data['adblue'] * 0.26417205)) + ' gallons of adblue remaining' + enhanced_data['refuelAmount'] = str(round(data['refuelAmount'] * 0.26417205)) + ' gallons' + + 
return enhanced_data
+
+        except Exception as e:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    f'There was a problem with the filter_data function: {e}. Returning original data.',
+                    color=LogType.INFO,
+                )
+            return data
+
+    # Convert an array of days, hours, minutes into clock time
+    async def convert_to_clock_time(self, timeArray, clockHours):
+        days = math.floor(timeArray[0])
+        hours = math.floor(timeArray[1])
+        minutes = math.floor(timeArray[2])
+        if len(timeArray) > 3:
+            seconds = math.floor(timeArray[3])
+        else:
+            seconds = 0
+
+        day_index = days % 7  # Get the remainder after dividing by 7
+        days_of_week = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
+        day = days_of_week[day_index]
+
+        if clockHours == 12:
+            period = "AM"
+            if hours >= 12:
+                period = "PM"
+                if hours > 12:
+                    hours -= 12
+            elif hours == 0:
+                hours = 12  # Midnight case
+
+            return f"{day}, {hours:02}:{minutes:02} {period}"
+        # Default to 24-hour clock format
+        return f"{day}, {hours:02}:{minutes:02}"
+
+    # Can be used for values that are in seconds like routeTime
+    async def convert_seconds_to_days_hours_minutes(self, seconds):
+        # Constants
+        seconds_per_day = 86400
+        seconds_per_hour = 3600
+        seconds_per_minute = 60
+
+        # Calculate the number of in-game days
+        days = seconds // seconds_per_day
+
+        # Calculate the remaining seconds in the current in-game day
+        remaining_seconds = seconds % seconds_per_day
+
+        # Calculate the current in-game hours
+        hours = remaining_seconds // seconds_per_hour
+        remaining_seconds %= seconds_per_hour
+
+        # Calculate the current in-game minutes
+        minutes = remaining_seconds // seconds_per_minute
+
+        # Calculate the remaining seconds
+        seconds = remaining_seconds % seconds_per_minute
+
+        return [days, hours, minutes, seconds]
+
+    # Can be used for values like time_abs that are in game minutes since beginning of game mode
+    async def convert_minutes_to_days_hours_minutes(self, minutes):
+        # Constants
+        minutes_per_day = 1440
+
+        # Calculate the number of in-game days
+        days = minutes // minutes_per_day
+
+        # Calculate the remaining minutes in the current in-game day
+        remaining_minutes = minutes % minutes_per_day
+
+        # Calculate the current in-game hours and minutes
+        hours = remaining_minutes // 60
+        final_minutes = remaining_minutes % 60
+
+        return [days, hours, final_minutes]
+
+    # Can be used for timestamps like time, simulatedTime, renderTime, should be converted to clock times
+    async def getDaysHoursMinutesAndSeconds(self, time):
+        dateTime = datetime.utcfromtimestamp(time)
+        return [dateTime.day, dateTime.hour, dateTime.minute, dateTime.second]
+
+    async def addTime(self, time, days, hours, minutes, seconds):
+        dateTime = datetime.utcfromtimestamp(time)
+        return dateTime + timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)
+
+    async def getTime(self, gameTime, timeUnits):
+        currentTime = datetime.utcfromtimestamp(gameTime)
+        formattedTime = currentTime.strftime('%a %I:%M %p' if timeUnits == 12 else '%a %H:%M')
+        return formattedTime
+
+    async def getDamagePercentage(self, data):
+        return max(data['wearEngine'],
+                   data['wearTransmission'],
+                   data['wearCabin'],
+                   data['wearChassis'],
+                   data['wearWheels']) * 100
+
+    async def getDamagePercentageTrailer(self, data):
+        return max(data['trailer'][0]['wearChassis'], data['trailer'][0]['wearWheels'], data['trailer'][0]['wearBody']) * 100
+
+    async def processTimeDifferenceArray(self, timeArray):  # TODO: account for negative differences, i.e. when the driver is already late
+        final_time_string = ""
+        days = math.floor(timeArray[0])
+        hours = math.floor(timeArray[1])
+        minutes = math.floor(timeArray[2])
+        if len(timeArray) > 3:
+            seconds = math.floor(timeArray[3])
+        else:
+            seconds = 0
+
+        if days == 1:
+            final_time_string += f"{days} day "
+        elif days > 1:
+            final_time_string += f"{days} days "
+
+        if hours == 1:
+            final_time_string += f"{hours} hour "
+        elif hours > 1:
+            final_time_string += f"{hours} hours "
+
+        if minutes == 1:
+            final_time_string += f"{minutes} minute "
+        elif minutes > 1:
+            final_time_string += f"{minutes} minutes "
+
+        if seconds == 1:
+            final_time_string += f"{seconds} second "
+        elif seconds > 1:
+            final_time_string += f"{seconds} seconds "
+
+        return final_time_string
+
+    async def isWorldOfTrucksContract(self, data):
+        return "external" in data['jobMarket']  # "external" in the job market type means a World of Trucks contract
+
+    async def getCurrency(self, money):
+        currencyCode = 'EUR' if self.use_metric_system else 'USD'
+
+        if currencyCode == 'EUR':
+            return f"€{money}"
+        elif currencyCode == 'USD':
+            return f"${money}"
+
+
+######## CODE TO CONVERT GAME COORDINATES TO REAL LIFE LAT / LONG #####
+# Adapted and modified from https://github.com/truckermudgeon/maps/blob/main/packages/libs/map/projections.ts and https://github.com/truckermudgeon/maps/blob/main/packages/apis/navigation/index.ts
+# Should be passed coordinateX as X and coordinateZ as Y for use
+
+
+    async def from_ats_coords_to_wgs84(self, x, y):
+        # ATS coords are like LCC coords, except Y grows southward (its sign is reversed)
+        lcc_coords = (x, -y)
+        lon, lat = self.ats_proj(*lcc_coords, inverse=True)
+        return lon, lat
+
+    async def from_ets2_coords_to_wgs84(self, x, y):
+        # Calais coordinates for UK detection
+        calais = (-31140, -5505)
+        is_uk = x < calais[0] and y < calais[1] - 100
+        converter = self.uk_proj if is_uk else self.ets2_proj
+
+        # Apply map offsets
+        x -= 16660
+        y -= 4150
+
+        # Additional offsets for UK coordinates
+        if is_uk:
+            x -= 16650
+            y -= 2700
+
+        # ETS2 coords are like LCC coords, except Y grows southward (its sign is reversed)
+        lcc_coords = (x, -y)
+        lon, lat = converter(*lcc_coords, inverse=True)
+        return lon, lat
+
+
+    async def convert_lat_long_data_into_place_data(self, latitude=None, longitude=None):
+
+        # If either value is missing, for instance when a connection exists but no data has arrived yet, return None
+        if not latitude or not longitude:
+            return None
+
+        # Set zoom level, see zoom documentation at https://nominatim.org/release-docs/develop/api/Reverse/
+        zoom = 15
+
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                f"Attempting query of OpenStreetMap Nominatim with parameters: {latitude}, {longitude}, zoom level: {zoom}",
+                color=LogType.INFO,
+            )
+
+        # Request data from the OpenStreetMap Nominatim API for reverse geocoding
+        url = f"https://nominatim.openstreetmap.org/reverse?format=jsonv2&lat={latitude}&lon={longitude}&zoom={zoom}&accept-language=en&extratags=1"
+        headers = {
+            'User-Agent': f'ats_telemetry_skill {self.wingman.name}'
+        }
+        response = requests.get(url, headers=headers, timeout=10)
+        if response.status_code == 200:
+            return response.json()
+        else:
+            if self.settings.debug_mode:
+                await self.printr.print_async(f"API request failed to {url}, status code: {response.status_code}.", color=LogType.INFO)
+            return None
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/ats_telemetry/readme.txt b/templates/migration/1_5_0/skills/ats_telemetry/readme.txt
new file mode 100644
index 00000000..ad45b6e4
--- /dev/null
+++ 
b/templates/migration/1_5_0/skills/ats_telemetry/readme.txt
@@ -0,0 +1,276 @@
+README:
+
+This skill is special because it requires a .DLL file to be placed in the game folder of American Truck Simulator or Euro Truck Simulator so that the game will send the required telemetry data to the skill.
+
+The DLL, scs-telemetry.dll, is already included in this skill folder. It was sourced from https://github.com/RenCloud/scs-sdk-plugin (September 21, 2023 release, v1.12.1).
+
+You can either manually copy this file into the bin/win_x64/plugins folder of your game directory, or point the skill to your ATS/ETS install directories and save the skill; it will then try to copy the .dll to the proper location itself.
+
+Here are the variables available through telemetry:
+
+### Common Unsigned Integers
+- `time_abs`
+
+### Config Unsigned Integers
+- `gears`
+- `gears_reverse`
+- `retarderStepCount`
+- `truckWheelCount`
+- `selectorCount`
+- `time_abs_delivery`
+- `maxTrailerCount`
+- `unitCount`
+- `plannedDistanceKm`
+
+### Truck Channel Unsigned Integers
+- `shifterSlot`
+- `retarderBrake`
+- `lightsAuxFront`
+- `lightsAuxRoof`
+- `truck_wheelSubstance[16]`
+- `hshifterPosition[32]`
+- `hshifterBitmask[32]`
+
+### Gameplay Unsigned Integers
+- `jobDeliveredDeliveryTime`
+- `jobStartingTime`
+- `jobFinishedTime`
+
+### Common Integers
+- `restStop`
+
+### Truck Integers
+- `gear`
+- `gearDashboard`
+- `hshifterResulting[32]`
+
+### Gameplay Integers
+- `jobDeliveredEarnedXp`
+
+### Common Floats
+- `scale`
+
+### Config Floats
+- `fuelCapacity`
+- `fuelWarningFactor`
+- `adblueCapacity`
+- `adblueWarningFactor`
+- `airPressureWarning`
+- `airPressurEmergency`
+- `oilPressureWarning`
+- `waterTemperatureWarning`
+- `batteryVoltageWarning`
+- `engineRpmMax`
+- `gearDifferential`
+- `cargoMass`
+- `truckWheelRadius[16]`
+- `gearRatiosForward[24]`
+- `gearRatiosReverse[8]`
+- `unitMass`
+
+### Truck Floats
+- `speed`
+- `engineRpm`
+- `userSteer`
+- `userThrottle`
+- `userBrake`
+- `userClutch`
+- `gameSteer`
+- `gameThrottle`
+- `gameBrake`
+- `gameClutch`
+- `cruiseControlSpeed`
+- `airPressure`
+- `brakeTemperature`
+- `fuel`
+- `fuelAvgConsumption`
+- `fuelRange`
+- `adblue`
+- `oilPressure`
+- `oilTemperature`
+- `waterTemperature`
+- `batteryVoltage`
+- `lightsDashboard`
+- `wearEngine`
+- `wearTransmission`
+- `wearCabin`
+- `wearChassis`
+- `wearWheels`
+- `truckOdometer`
+- `routeDistance`
+- `routeTime`
+- `speedLimit`
+- `truck_wheelSuspDeflection[16]`
+- `truck_wheelVelocity[16]`
+- `truck_wheelSteering[16]`
+- `truck_wheelRotation[16]`
+- `truck_wheelLift[16]`
+- `truck_wheelLiftOffset[16]`
+
+### Gameplay Floats
+- `jobDeliveredCargoDamage`
+- `jobDeliveredDistanceKm`
+- `refuelAmount`
+
+### Job Floats
+- `cargoDamage`
+
+### Config Bools
+- `truckWheelSteerable[16]`
+- `truckWheelSimulated[16]`
+- `truckWheelPowered[16]`
+- `truckWheelLiftable[16]`
+- `isCargoLoaded`
+- `specialJob`
+
+### Truck Bools
+- `parkBrake`
+- `motorBrake`
+- `airPressureWarning`
+- `airPressureEmergency`
+- `fuelWarning`
+- `adblueWarning`
+- `oilPressureWarning`
+- `waterTemperatureWarning`
+- `batteryVoltageWarning`
+- `electricEnabled`
+- `engineEnabled`
+- `wipers`
+- `blinkerLeftActive`
+- `blinkerRightActive`
+- `blinkerLeftOn`
+- `blinkerRightOn`
+- `lightsParking`
+- `lightsBeamLow`
+- `lightsBeamHigh`
+- `lightsBeacon`
+- `lightsBrake`
+- `lightsReverse`
+- `lightsHazard`
+- `cruiseControl`
+- `truck_wheelOnGround[16]`
+- `shifterToggle[2]`
+- `differentialLock`
+- `liftAxle` +- `liftAxleIndicator` +- `trailerLiftAxle` +- `trailerLiftAxleIndicator` + +### Gameplay Bools +- `jobDeliveredAutoparkUsed` +- `jobDeliveredAutoloadUsed` + +### Config FVectors +- `cabinPositionX` +- `cabinPositionY` +- `cabinPositionZ` +- `headPositionX` +- `headPositionY` +- `headPositionZ` +- `truckHookPositionX` +- `truckHookPositionY` +- `truckHookPositionZ` +- `truckWheelPositionX[16]` +- `truckWheelPositionY[16]` +- `truckWheelPositionZ[16]` + +### Truck FVectors +- `lv_accelerationX` +- `lv_accelerationY` +- `lv_accelerationZ` +- `av_accelerationX` +- `av_accelerationY` +- `av_accelerationZ` +- `accelerationX` +- `accelerationY` +- `accelerationZ` +- `aa_accelerationX` +- `aa_accelerationY` +- `aa_accelerationZ` +- `cabinAVX` +- `cabinAVY` +- `cabinAVZ` +- `cabinAAX` +- `cabinAAY` +- `cabinAAZ` + +### Truck FPlacements +- `cabinOffsetX` +- `cabinOffsetY` +- `cabinOffsetZ` +- `cabinOffsetrotationX` +- `cabinOffsetrotationY` +- `cabinOffsetrotationZ` +- `headOffsetX` +- `headOffsetY` +- `headOffsetZ` +- `headOffsetrotationX` +- `headOffsetrotationY` +- `headOffsetrotationZ` + +### Truck DPlacements +- `coordinateX` +- `coordinateY` +- `coordinateZ` +- `rotationX` +- `rotationY` +- `rotationZ` + +### Config Strings +- `truckBrandId` +- `truckBrand` +- `truckId` +- `truckName` +- `cargoId` +- `cargo` +- `cityDstId` +- `cityDst` +- `compDstId` +- `compDst` +- `citySrcId` +- `citySrc` +- `compSrcId` +- `compSrc` +- `shifterType` +- `truckLicensePlate` +- `truckLicensePlateCountryId` +- `truckLicensePlateCountry` +- `jobMarket` + +### Gameplay Strings +- `fineOffence` +- `ferrySourceName` +- `ferryTargetName` +- `ferrySourceId` +- `ferryTargetId` +- `trainSourceName` +- `trainTargetName` +- `trainSourceId` +- `trainTargetId` + +### Config ULL +- `jobIncome` + +### Gameplay LL +- `jobCancelledPenalty` +- `jobDeliveredRevenue` +- `fineAmount` +- `tollgatePayAmount` +- `ferryPayAmount` +- `trainPayAmount` + +### Special Bools +- `onJob` +- `jobFinished` +- `jobCancelled` +- `jobDelivered` +- `fined` +- `tollgate` +- `ferry` +- `train` +- `refuel` +- `refuelPayed` + + +### Trailer Data +- `trailer[10]` \ No newline at end of file diff --git a/templates/migration/1_5_0/skills/ats_telemetry/requirements.txt b/templates/migration/1_5_0/skills/ats_telemetry/requirements.txt new file mode 100644 index 00000000..51de2db9 Binary files /dev/null and b/templates/migration/1_5_0/skills/ats_telemetry/requirements.txt differ diff --git a/templates/migration/1_5_0/skills/ats_telemetry/scs-telemetry.dll b/templates/migration/1_5_0/skills/ats_telemetry/scs-telemetry.dll new file mode 100644 index 00000000..1b8acff4 Binary files /dev/null and b/templates/migration/1_5_0/skills/ats_telemetry/scs-telemetry.dll differ diff --git a/templates/migration/1_5_0/skills/ats_telemetry/scs_telemetry_explanation.txt b/templates/migration/1_5_0/skills/ats_telemetry/scs_telemetry_explanation.txt new file mode 100644 index 00000000..9b414b54 --- /dev/null +++ b/templates/migration/1_5_0/skills/ats_telemetry/scs_telemetry_explanation.txt @@ -0,0 +1,248 @@ +- **SCS_TELEMETRY_trailers_count**: Maximum number of trailers supported by the telemetry SDK. +- **SCS_TELEMETRY_CONFIG_substances**: Configuration of the substances, indexed by substance. + - **id**: Internal identifier for the substance. +- **SCS_TELEMETRY_CONFIG_controls**: Static configuration of the controls. + - **shifter_type**: Type of the shifter (e.g., manual, automatic). +- **SCS_TELEMETRY_CONFIG_hshifter**: Configuration of the h-shifter. 
+ - **selector_count**: Number of selectors (e.g., range, splitter toggles). + - **slot_gear**: Gear selected when requirements for this slot are met. + - **slot_handle_position**: Position of the h-shifter handle. + - **slot_selectors**: Bitmask of required on/off state of selectors. +- **SCS_TELEMETRY_CONFIG_truck**: Static configuration of the truck. + - **brand_id**: Internal identifier for the truck brand. + - **brand**: Localized brand name for display purposes. + - **id**: Internal identifier for the truck. + - **name**: Localized name for display purposes. + - **fuel_capacity**: Fuel tank capacity in liters. + - **fuel_warning_factor**: Fraction of fuel capacity that triggers a warning. + - **adblue_capacity**: AdBlue tank capacity in liters. + - **adblue_warning_factor**: Fraction of AdBlue capacity that triggers a warning. + - **air_pressure_warning**: Air pressure below which the warning activates. + - **air_pressure_emergency**: Air pressure below which the emergency brakes activate. + - **oil_pressure_warning**: Oil pressure below which the warning activates. + - **water_temperature_warning**: Water temperature above which the warning activates. + - **battery_voltage_warning**: Battery voltage below which the warning activates. + - **rpm_limit**: Maximum RPM value. + - **forward_gear_count**: Number of forward gears on the truck. + - **reverse_gear_count**: Number of reverse gears on the truck. + - **retarder_step_count**: Number of steps in the retarder. + - **cabin_position**: Position of the cabin in the vehicle space. + - **head_position**: Default head position in the cabin space. + - **hook_position**: Position of the trailer connection hook. + - **license_plate**: Vehicle license plate. + - **license_plate_country**: Name of the country for the license plate. + - **license_plate_country_id**: Internal identifier for the license plate country. + - **wheel_count**: Number of wheels on the truck. + - **wheel_position**: Positions of each wheel. +- **SCS_TELEMETRY_CONFIG_trailer**: Static configuration of the trailer. + - **id**: Internal identifier for the trailer. + - **cargo_accessory_id**: Internal identifier for the cargo accessory. + - **hook_position**: Position of the trailer hook. + - **brand_id**: Internal identifier for the trailer brand. + - **brand**: Localized brand name for display purposes. + - **name**: Localized name for display purposes. + - **chain_type**: Chain type description for the first trailer. + - **body_type**: Body type description for the first trailer. + - **license_plate**: Trailer license plate. + - **license_plate_country**: Name of the country for the license plate. + - **license_plate_country_id**: Internal identifier for the license plate country. + - **wheel_count**: Number of wheels on the trailer. + - **wheel_position**: Positions of each wheel on the trailer. +- **SCS_TELEMETRY_CONFIG_job**: Static configuration of the job. + - **cargo_id**: Internal identifier for the cargo. + - **cargo**: Localized cargo name for display purposes. + - **cargo_mass**: Mass of the cargo in kilograms. + - **destination_city_id**: Internal identifier for the destination city. + - **destination_city**: Localized name for the destination city. + - **source_city_id**: Internal identifier for the source city. + - **source_city**: Localized name for the source city. + - **destination_company_id**: Internal identifier for the destination company. + - **destination_company**: Localized name for the destination company. 
+    - **source_company_id**: Internal identifier for the source company.
+    - **source_company**: Localized name for the source company.
+    - **income**: Expected income for the job without penalties.
+    - **delivery_time**: In-game time for job delivery.
+    - **is_cargo_loaded**: Boolean indicating if the cargo is loaded.
+    - **job_market**: Type of job market (e.g., cargo_market, quick_job).
+    - **special_job**: Flag indicating whether the job is a special transport job.
+    - **planned_distance_km**: Planned job distance in simulated kilometers.
+- **truck.world.placement**: Represents world space position and orientation of the truck. (Type: dplacement)
+- **truck.local.velocity.linear**: Vehicle space linear velocity in meters per second. (Type: fvector)
+- **truck.local.velocity.angular**: Vehicle space angular velocity in rotations per second. (Type: fvector)
+- **truck.local.acceleration.linear**: Vehicle space linear acceleration in meters per second². (Type: fvector)
+- **truck.local.acceleration.angular**: Vehicle space angular acceleration in rotations per second². (Type: fvector)
+- **truck.cabin.offset**: Vehicle space position and orientation delta of the cabin from its default position. (Type: fplacement)
+- **truck.cabin.velocity.angular**: Cabin space angular velocity in rotations per second. (Type: fvector)
+- **truck.cabin.acceleration.angular**: Cabin space angular acceleration in rotations per second². (Type: fvector)
+- **truck.head.offset**: Cabin space position and orientation delta of the driver’s head from its default position. (Type: fplacement)
+- **truck.speed**: Speedometer speed in meters per second. Uses negative value for reverse movement. (Type: float)
+- **truck.engine.rpm**: RPM of the engine. (Type: float)
+- **truck.engine.gear**: Gear currently selected in the engine; >0 for forward, 0 for neutral, <0 for reverse. (Type: s32)
+- **truck.displayed.gear**: Gear currently displayed on the dashboard; >0 for forward, 0 for neutral, <0 for reverse. (Type: s32)
+- **truck.input.steering**: Steering received from input, ranged from -1 to 1, counterclockwise. (Type: float)
+- **truck.input.throttle**: Throttle received from input, ranged from 0 to 1. (Type: float)
+- **truck.input.brake**: Brake received from input, ranged from 0 to 1. (Type: float)
+- **truck.input.clutch**: Clutch received from input, ranged from 0 to 1. (Type: float)
+- **truck.effective.steering**: Steering as used by the simulation, ranged from -1 to 1, counterclockwise. 
(Type: float) +- **truck.effective.throttle**: Throttle pedal input as used by the simulation, ranged from 0 to 1. (Type: float) +- **truck.effective.brake**: Brake pedal input as used by the simulation, ranged from 0 to 1. (Type: float) +- **truck.effective.clutch**: Clutch pedal input as used by the simulation, ranged from 0 to 1. (Type: float) +- **truck.cruise_control**: Speed selected for cruise control in meters per second; 0 if disabled. (Type: float) +- **truck.hshifter.slot**: Gearbox slot the h-shifter handle is currently in; 0 means no slot is selected. (Type: u32) +- **truck.hshifter.select**: Enabled state of range/splitter selector toggles. (Type: indexed bool) +- **truck.brake.parking**: Whether the parking brake is enabled. (Type: bool) +- **truck.brake.motor**: Whether the engine brake is enabled. (Type: bool) +- **truck.brake.retarder**: Current level of the retarder; ranged from 0 to max. (Type: u32) +- **truck.brake.air.pressure**: Pressure in the brake air tank in psi. (Type: float) +- **truck.brake.air.pressure.warning**: Whether air pressure warning is active. (Type: bool) +- **truck.brake.air.pressure.emergency**: Whether emergency brakes are active due to low air pressure. (Type: bool) +- **truck.brake.temperature**: Temperature of the brakes in degrees Celsius. (Type: float) +- **truck.fuel.amount**: Amount of fuel in liters. (Type: float) +- **truck.fuel.warning**: Whether low fuel warning is active. (Type: bool) +- **truck.fuel.consumption.average**: Average fuel consumption in liters/km. (Type: float) +- **truck.fuel.range**: Estimated range with current amount of fuel in km. (Type: float) +- **truck.adblue**: Amount of AdBlue in liters. (Type: float) +- **truck.adblue.warning**: Whether low AdBlue warning is active. (Type: bool) +- **truck.adblue.consumption.average**: Average AdBlue consumption in liters/km. (Type: float) +- **truck.oil.pressure**: Pressure of the oil in psi. (Type: float) +- **truck.oil.pressure.warning**: Whether oil pressure warning is active. (Type: bool) +- **truck.oil.temperature**: Temperature of the oil in degrees Celsius. (Type: float) +- **truck.water.temperature**: Temperature of the water in degrees Celsius. (Type: float) +- **truck.water.temperature.warning**: Whether water temperature warning is active. (Type: bool) +- **truck.battery.voltage**: Voltage of the battery in volts. (Type: float) +- **truck.battery.voltage.warning**: Whether battery voltage warning is active. (Type: bool) +- **truck.electric.enabled**: Whether electric is enabled. (Type: bool) +- **truck.engine.enabled**: Whether engine is enabled. (Type: bool) +- **truck.lblinker**: Whether the left blinker is enabled. (Type: bool) +- **truck.rblinker**: Whether the right blinker is enabled. (Type: bool) +- **truck.hazard.warning**: Whether the hazard warning light is enabled. (Type: bool) +- **truck.light.lblinker**: Whether the left blinker light is on. (Type: bool) +- **truck.light.rblinker**: Whether the right blinker light is on. (Type: bool) +- **truck.light.parking**: Whether the parking lights are enabled. (Type: bool) +- **truck.light.beam.low**: Whether the low beam lights are enabled. (Type: bool) +- **truck.light.beam.high**: Whether the high beam lights are enabled. (Type: bool) +- **truck.light.aux.front**: Intensity of the auxiliary front lights; 1 for dimmed, 2 for full. (Type: u32) +- **truck.light.aux.roof**: Intensity of the auxiliary roof lights; 1 for dimmed, 2 for full. 
(Type: u32) +- **truck.light.beacon**: Whether the beacon lights are enabled. (Type: bool) +- **truck.light.brake**: Whether the brake light is active. (Type: bool) +- **truck.light.reverse**: Whether the reverse light is active. (Type: bool) +- **truck.wipers**: Whether the wipers are enabled. (Type: bool) +- **truck.dashboard.backlight**: Intensity of the dashboard backlight as a factor (0;1). (Type: float) +- **truck.differential_lock**: Whether the differential lock is enabled. (Type: bool) +- **truck.lift_axle**: Whether the lift axle control is set to the lifted state. (Type: bool) +- **truck.lift_axle.indicator**: Whether the lift axle indicator is lit. (Type: bool) +- **truck.trailer.lift_axle**: Whether the trailer lift axle control is set to the lifted state. (Type: bool) +- **truck.trailer.lift_axle.indicator**: Whether the trailer lift axle indicator is lit. (Type: bool) +- **truck.wear.engine**: Wear of the engine accessory as a factor (0;1). (Type: float) +- **truck.wear.transmission**: Wear of the transmission accessory as a factor (0;1). (Type: float) +- **truck.wear.cabin**: Wear of the cabin accessory as a factor (0;1). (Type: float) +- **truck.wear.chassis**: Wear of the chassis accessory as a factor (0;1). (Type: float) +- **truck.wear.wheels**: Average wear of the wheel accessories as a factor (0;1). (Type: float) +- **truck.odometer**: Value of the odometer in kilometers. (Type: float) +- **truck.navigation.distance**: Truck’s navigation distance in meters, used by the advisor. (Type: float) +- **truck.navigation.time**: Truck’s navigation estimated time of arrival in seconds, used by the advisor. (Type: float) +- **truck.navigation.speed.limit**: Truck’s navigation speed limit in meters per second, used by the advisor. (Type: float) +- **truck.wheel.suspension.deflection**: Vertical displacement of the wheel from its axle in meters. (Type: indexed float) +- **truck.wheel.on_ground**: Whether the wheel is in contact with the ground. (Type: indexed bool) +- **truck.wheel.substance**: Substance below the wheel, indexed by substance ID. (Type: indexed u32) +- **truck.wheel.angular_velocity**: Angular velocity of the wheel in rotations per second. (Type: indexed float) +- **truck.wheel.steering**: Steering rotation of the wheel in rotations, from -0.25 to 0.25. (Type: indexed float) +- **truck.wheel.rotation**: Rolling rotation of the wheel in rotations, from 0.0 to 1.0. (Type: indexed float) +- **truck.wheel.lift**: Lift state of the wheel, from 0 to 1. (Type: indexed float) +- **truck.wheel.lift.offset**: Vertical displacement of the wheel axle due to lifting, in meters. (Type: indexed float) + + +- **job.cancelled**: Event triggered when a job is cancelled. + - **cancel.penalty**: Penalty for cancelling the job in game currency (Type: s64). +- **job.delivered**: Event triggered when a job is delivered. + - **revenue**: Job revenue in game currency (Type: s64). + - **earned_xp**: XP received for the job (Type: s32). + - **cargo_damage**: Total cargo damage ranging from 0.0 to 1.0 (Type: float). + - **distance_km**: Real distance covered on the job in kilometers (Type: float). + - **delivery_time**: Total time spent on the job in game minutes (Type: u32). + - **auto_park_used**: Boolean indicating if auto parking was used (Type: bool). + - **auto_load_used**: Boolean indicating if auto loading was used (always true for non-cargo market jobs) (Type: bool). +- **player.fined**: Event triggered when the player gets fined. 
+    - **fine_offence**: Type of offence resulting in the fine (e.g., crash, speeding, no lights) (Type: string).
+    - **fine_amount**: Fine amount in game currency (Type: s64).
+- **player.tollgate.paid**: Event triggered when the player pays a tollgate fee.
+    - **pay_amount**: Amount paid for the tollgate in game currency (Type: s64).
+- **player.use.ferry**: Event triggered when the player uses a ferry.
+    - **pay_amount**: Amount paid for the ferry in game currency (Type: s64).
+    - **source_name**: Name of the ferry departure location (Type: string).
+    - **target_name**: Name of the ferry destination location (Type: string).
+    - **source_id**: ID of the ferry departure location (Type: string).
+    - **target_id**: ID of the ferry destination location (Type: string).
+- **player.use.train**: Event triggered when the player uses a train.
+    - **pay_amount**: Amount paid for the train in game currency (Type: s64).
+    - **source_name**: Name of the train departure location (Type: string).
+    - **target_name**: Name of the train destination location (Type: string).
+    - **source_id**: ID of the train departure location (Type: string).
+    - **target_id**: ID of the train destination location (Type: string).
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/auto_screenshot/default_config.yaml b/templates/migration/1_5_0/skills/auto_screenshot/default_config.yaml
new file mode 100644
index 00000000..f45fc65c
--- /dev/null
+++ b/templates/migration/1_5_0/skills/auto_screenshot/default_config.yaml
@@ -0,0 +1,41 @@
+name: AutoScreenshot
+module: skills.auto_screenshot.main
+category: general
+description:
+  en: Take screenshots on command or automatically when you show excitement.
+  de: Mache Screenshots auf Befehl oder automatisch wenn du Aufregung zeigst.
+examples:
+  - question:
+      en: Take a screenshot
+      de: Mache einen Screenshot
+    answer:
+      en: (takes a screenshot of the focused window)
+      de: (macht einen Screenshot des fokussierten Fensters)
+  - question:
+      en: Oh wow, this is crazy!
+      de: Oh wow, das ist verrückt!
+    answer:
+      en: (takes a screenshot of the focused window)
+      de: (macht einen Screenshot des fokussierten Fensters)
+prompt: |
+  You can take a screenshot of the focused window when asked by the user. You can also take a screenshot when you infer something important, exciting, scary or interesting is going on where having a screenshot would create a nice memory of the moment for the user.
+  Use the 'take_screenshot' tool to capture the screenshot, and provide the reason why you are doing so, for example the user's request or why you decided the moment was worth capturing.
+  Example 1: User says something like "oh wow" or "oh no" or "this is crazy!"
+  Your response: (use take_screenshot tool with "exciting moment" reason)
+  Example 2: User says "take a screenshot"
+  Your response: (use take_screenshot tool with "user request" reason)
+  Example 3: User says "look at my screen and tell me what you see."
+  Your response: (Use VisionAI skill if available, do not call take_screenshot tool. User does not want a screenshot, they want you to look at what they are seeing.)
+custom_properties:
+  - hint: If you have multiple monitors, the display to capture if detecting the active game window fails
+    id: display
+    name: Display to capture
+    property_type: number
+    required: true
+    value: 1
+  - hint: The default directory to put screenshots in. If left blank, it will default to your WingmanAI config directory in a subdirectory called "/screenshots".
+ id: default_directory + name: Default directory + property_type: string + required: false + value: "" diff --git a/templates/migration/1_5_0/skills/auto_screenshot/logo.png b/templates/migration/1_5_0/skills/auto_screenshot/logo.png new file mode 100644 index 00000000..851cc1ff Binary files /dev/null and b/templates/migration/1_5_0/skills/auto_screenshot/logo.png differ diff --git a/templates/migration/1_5_0/skills/auto_screenshot/main.py b/templates/migration/1_5_0/skills/auto_screenshot/main.py new file mode 100644 index 00000000..ef9e2530 --- /dev/null +++ b/templates/migration/1_5_0/skills/auto_screenshot/main.py @@ -0,0 +1,145 @@ +import os +import time +from datetime import datetime +from typing import TYPE_CHECKING +from mss import mss +import pygetwindow as gw +from PIL import Image +from api.enums import LogType +from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError +from skills.skill_base import Skill +from services.file import get_writable_dir + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class AutoScreenshot(Skill): + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + self.default_directory = "" + self.display = 1 + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + self.default_directory = self.retrieve_custom_property_value( + "default_directory", errors + ) + if not self.default_directory or self.default_directory == "" or not os.path.isdir(self.default_directory): + self.default_directory = self.get_default_directory() + if self.settings.debug_mode: + await self.printr.print_async( + "User either did not enter default directory or entered directory is invalid. Defaulting to wingman config directory / screenshots", + color=LogType.INFO, + ) + + + self.display = self.retrieve_custom_property_value("display", errors) + + return errors + + def get_default_directory(self) -> str: + return get_writable_dir("screenshots") + + async def take_screenshot(self, reason: str) -> None: + try: + focused_window = gw.getActiveWindow() + + if self.settings.debug_mode: + await self.printr.print_async( + f"Taking screenshot because: {reason}. Focused window: {focused_window}", + color=LogType.INFO, + ) + + window_bbox = { + "top": focused_window.top, + "left": focused_window.left, + "width": focused_window.width, + "height": focused_window.height, + } + + if self.settings.debug_mode: + await self.printr.print_async( + f"{focused_window} bbox detected as: {window_bbox}", + color=LogType.INFO, + ) + + except Exception as e: + if self.settings.debug_mode: + await self.printr.print_async( + f"Failed to get focused window or window bbox using pygetwindow: {e}. 
Defaulting to full screen capture.", + color=LogType.ERROR, + ) + window_bbox = None + + with mss() as sct: + if window_bbox: + screenshot = sct.grab(window_bbox) + else: + main_display = sct.monitors[self.display] + screenshot = sct.grab(main_display) + + image = Image.frombytes( + "RGB", screenshot.size, screenshot.bgra, "raw", "BGRX" + ) + + timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") + screenshot_file = os.path.join(self.default_directory, f'{self.wingman.name}_{timestamp}.png') + image.save(screenshot_file) + + if self.settings.debug_mode: + await self.printr.print_async( + f"Screenshot saved at: {screenshot_file}", + color=LogType.INFO, + ) + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "take_screenshot", + { + "type": "function", + "function": { + "name": "take_screenshot", + "description": "Takes a screenshot of the currently focused game window and saves it in the default directory.", + "parameters": { + "type": "object", + "properties": { + "reason": { + "type": "string", + "description": "The reason for taking a screenshot.", + }, + }, + "required": ["reason"], + }, + }, + }, + ), + ] + return tools + + + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "" + instant_response = "" + + if tool_name == "take_screenshot": + if self.settings.debug_mode: + self.start_execution_benchmark() + reason = parameters.get("reason", "unspecified reason") + await self.take_screenshot(reason) + function_response = "Screenshot taken successfully." + + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response \ No newline at end of file diff --git a/templates/migration/1_5_0/skills/auto_screenshot/requirements.txt b/templates/migration/1_5_0/skills/auto_screenshot/requirements.txt new file mode 100644 index 00000000..dfb45a6c --- /dev/null +++ b/templates/migration/1_5_0/skills/auto_screenshot/requirements.txt @@ -0,0 +1,3 @@ +mss==9.0.1 +pygetwindow==0.0.9 +pillow==10.3.0 \ No newline at end of file diff --git a/templates/migration/1_5_0/skills/control_windows/default_config.yaml b/templates/migration/1_5_0/skills/control_windows/default_config.yaml new file mode 100644 index 00000000..c1c2be98 --- /dev/null +++ b/templates/migration/1_5_0/skills/control_windows/default_config.yaml @@ -0,0 +1,30 @@ +name: ControlWindows +module: skills.control_windows.main +category: general +description: + en: Launch applications and control your Windows computer. Utilize the clipboard of your computer. + de: Starte Anwendungen und steuere deinen Windows-Computer. Nutze die Zwischenablage deines Computers. +hint: + en: This skill looks for applications in your Start Menu directory (%APPDATA%/Microsoft/Windows/Start Menu/Programs), so if it tells you that it cannot find an application, create a shortcut in that directory. + de: Dieser Skill sucht nach Anwendungen in deinem Startmenü-Verzeichnis (%APPDATA%/Microsoft/Windows/Start Menu/Programs). Wenn er dir also sagt, dass er eine Anwendung nicht finden kann, erstelle eine Verknüpfung in diesem Verzeichnis. +examples: + - question: + en: Open Spotify. + de: Öffne Spotify. + answer: + en: (opens the Spotify application) + de: (öffnet die Spotify-Anwendung) + - question: + en: Activate Notepad. + de: Aktiviere Notepad. + answer: + en: (maximizes the Notepad application) + de: (maximiert die Notepad Anwendung) + - question: + en: Close Notepad. + de: Schließe Notepad. 
+      answer:
+        en: (closes the Notepad application)
+        de: (schließt die Notepad Anwendung)
+prompt: |
+  You can also control Windows Functions, like opening, closing, moving, and listing active applications, reading text from the clipboard, and placing or saving text on the clipboard.
diff --git a/templates/migration/1_5_0/skills/control_windows/logo.png b/templates/migration/1_5_0/skills/control_windows/logo.png
new file mode 100644
index 00000000..3242b570
Binary files /dev/null and b/templates/migration/1_5_0/skills/control_windows/logo.png differ
diff --git a/templates/migration/1_5_0/skills/control_windows/main.py b/templates/migration/1_5_0/skills/control_windows/main.py
new file mode 100644
index 00000000..dabb02ef
--- /dev/null
+++ b/templates/migration/1_5_0/skills/control_windows/main.py
@@ -0,0 +1,341 @@
+import os
+import time
+from pathlib import Path
+from typing import TYPE_CHECKING
+import pygetwindow as gw
+from clipboard import Clipboard
+from api.interface import SettingsConfig, SkillConfig
+from api.enums import LogType
+from skills.skill_base import Skill
+import mouse.mouse as mouse
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class ControlWindows(Skill):
+
+    # Paths to Start Menu directories
+    start_menu_paths: list[Path] = [
+        Path(os.environ["APPDATA"], "Microsoft", "Windows", "Start Menu", "Programs"),
+        Path(
+            os.environ["PROGRAMDATA"], "Microsoft", "Windows", "Start Menu", "Programs"
+        ),
+    ]
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+    # Function to recursively list files in a directory
+    def list_files(self, directory, extension=""):
+        for item in directory.iterdir():
+            if item.is_dir():
+                yield from self.list_files(item, extension)
+            elif item.is_file() and item.suffix == extension:
+                yield item
+
+    # Microsoft does odd things with its tab titles (see https://github.com/asweigart/PyGetWindow/issues/54), so first look for windows matching the app name as given and, if no match is found, retry with the unicode special character Microsoft inserts
+    def get_and_check_windows(self, app_name):
+        windows = gw.getWindowsWithTitle(app_name)
+        if not windows and "Microsoft Edge".lower() in app_name.lower():
+            app_name = app_name.replace("Microsoft Edge", "Microsoft\u200b Edge")
+            app_name = app_name.replace("microsoft edge", "Microsoft\u200b Edge")
+            windows = gw.getWindowsWithTitle(app_name)
+        if not windows:
+            return None
+        return windows
+
+    # Function to search and start an application
+    def search_and_start(self, app_name):
+        for start_menu_path in self.start_menu_paths:
+            if start_menu_path.exists():
+                for file_path in self.list_files(start_menu_path, ".lnk"):
+                    if app_name.lower() in file_path.stem.lower():
+                        # Attempt to start the application
+                        try:
+                            os.startfile(str(file_path))
+                            # subprocess.Popen([str(file_path)])
+                        except Exception:
+                            return False
+
+                        return True
+
+        return False
+
+    def close_application(self, app_name):
+        windows = self.get_and_check_windows(app_name)
+        if windows and len(windows) > 0:
+            for window in windows:
+                try:
+                    window.close()
+                except Exception:
+                    return False
+
+            return True
+
+        return False
+
+    def execute_ui_command(self, app_name: str, command: str):
+        windows = self.get_and_check_windows(app_name)
+        if windows and len(windows) > 0:
+            for window in windows:
+                try:
+                    getattr(window, command)()
+                except AttributeError:
+                    pass
+
+            return True
+
+        return False
+
+    def activate_application(self, app_name: str):
+        windows = self.get_and_check_windows(app_name)
+        if windows and len(windows) > 0:
+            for window in windows:
+                # See https://github.com/asweigart/PyGetWindow/issues/36#issuecomment-919332733 for why just regular "activate" may not work
+                try:
+                    window.minimize()
+                    window.restore()
+                    window.activate()
+                except Exception:
+                    return False
+
+            return True
+
+        return False
+
+    async def move_application(self, app_name: str, command: str):
+        windows = self.get_and_check_windows(app_name)
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                f"Windows found in move_application function matching {app_name}: {windows}",
+                color=LogType.INFO,
+            )
+
+        if windows and len(windows) > 0:
+            for window in windows:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Executing move_application command for: {window.title}",
+                        color=LogType.INFO,
+                    )
+                # Make sure application is active before moving it
+                try:
+                    window.minimize()
+                    window.restore()
+                    # Temporarily maximize it, let Windows do the work of what maximize means based on the user's setup
+                    window.maximize()
+                    time.sleep(0.5)
+                except Exception:
+                    pass
+                # Assume that maximize is a proxy for the appropriate full size of a window in this setup, use that to calculate resize
+                monitor_width, monitor_height = window.size
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Before resize and move, {window.title} is {window.size} and is located at {window.topleft}.",
+                        color=LogType.INFO,
+                    )
+
+                try:
+                    if "left" in command:
+                        window.resizeTo(int(monitor_width * 0.5), int(monitor_height))
+                        window.moveTo(0, 0)
+                    if "right" in command:
+                        window.resizeTo(int(monitor_width * 0.5), int(monitor_height))
+                        window.moveTo(int(monitor_width * 0.5), 0)
+                    if "top" in command:
+                        window.resizeTo(int(monitor_width), int(monitor_height * 0.5))
+                        window.moveTo(0, 0)
+                    if "bottom" in command:
+                        window.resizeTo(int(monitor_width), int(monitor_height * 0.5))
+                        window.moveTo(0, int(monitor_height * 0.5))
+                    if self.settings.debug_mode:
+                        await self.printr.print_async(
+                            f"Executed move_application command {command}; {window.title} is now {window.size} and is located at {window.topleft}.",
+                            color=LogType.INFO,
+                        )
+                    # Check if the resize and move command really worked; if not, return False so WingmanAI does not tell the user the command was successful when it was not
+                    if (monitor_width, monitor_height) == window.size:
+                        # Try a last-ditch manual drag with the mouse if moving to left or right
+                        if "left" in command:
+                            mouse.move(int(monitor_width * 0.5), 10, duration=1.0)
+                            time.sleep(0.1)
+                            mouse.press(button="left")
+                            mouse.move(20, 10, duration=1.0)
+                            time.sleep(0.1)
+                            mouse.release(button="left")
+                            return True
+
+                        elif "right" in command:
+                            mouse.move(int(monitor_width * 0.5), 10, duration=1.0)
+                            time.sleep(0.1)
+                            mouse.press(button="left")
+                            mouse.move(monitor_width - 20, 10, duration=1.0)
+                            time.sleep(0.1)
+                            mouse.release(button="left")
+                            return True
+                        # Return False as failed if the window could not be moved through any method
+                        return False
+                    return True
+
+                # If any errors occur while trying to move and resize windows, return False as well
+                except Exception:
+                    return False
+
+        # If no windows found, return False
+        return False
+
+    async def list_applications(self):
+        window_titles = gw.getAllTitles()
+        if window_titles:
+            titles_as_string = ", ".join(window_titles)
+            response = (
+                f"List of all application window titles found: {titles_as_string}."
+ ) + if self.settings.debug_mode: + await self.printr.print_async( + f"list_applications command found these applications: {titles_as_string}", + color=LogType.INFO, + ) + return response + return False + + def place_text_on_clipboard(self, text: str) -> str: + try: + with Clipboard() as clipboard: + clipboard.set_clipboard(text) + return "Text successfully placed on clipboard." + except KeyError: + return "Error: Cannot save content to Clipboard as text. Images and other non-text content cannot be processed." + except Exception as e: + return f"Error: {str(e)}" + + def get_text_from_clipboard(self) -> str: + try: + with Clipboard() as clipboard: + text = clipboard["text"] + return f"Text copied from clipboard: {text}" + except KeyError: + return "Error: Clipboard has no text. Images and other non-text content of the clipboard cannot be processed." + except Exception as e: + return f"Error: {str(e)}" + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "control_windows_functions", + { + "type": "function", + "function": { + "name": "control_windows_functions", + "description": "Control Windows Functions, like opening, closing, listing, and moving applications, and reading clipboard content.", + "parameters": { + "type": "object", + "properties": { + "command": { + "type": "string", + "description": "The command to execute", + "enum": [ + "open", + "close", + "minimize", + "maximize", + "restore", + "activate", + "snap_left", + "snap_right", + "snap_top", + "snap_bottom", + "list_applications", + "read_clipboard_content", + "place_text_on_clipboard", + ], + }, + "parameter": { + "type": "string", + "description": "The parameter for the command. For example, the application name to open, close, or move. Or the information to get or use.", + }, + }, + "required": ["command"], + }, + }, + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "Error: Application not found." + instant_response = "" + + if tool_name == "control_windows_functions": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing control_windows_functions with parameters: {parameters}", + color=LogType.INFO, + ) + + parameter = parameters.get("parameter") + + if parameters["command"] == "open": + app_started = self.search_and_start(parameter) + if app_started: + function_response = "Application started." + + elif parameters["command"] == "close": + app_closed = self.close_application(parameter) + if app_closed: + function_response = "Application closed." + + elif parameters["command"] == "activate": + app_activated = self.activate_application(parameter) + if app_activated: + function_response = "Application activated." + + elif any( + word in parameters["command"].lower() + for word in ["left", "right", "top", "bottom"] + ): + command = parameters["command"].lower() + app_moved = await self.move_application(parameter, command) + if app_moved: + function_response = "Application moved" + else: + function_response = "There was a problem moving that application. The application may not support moving it through automation." + + elif "list" in parameters["command"].lower(): + apps_listed = await self.list_applications() + if apps_listed: + function_response = apps_listed + else: + function_response = ( + "There was a problem getting your list of applications." 
+ ) + + elif "read_clipboard" in parameters["command"].lower(): + text_received = self.get_text_from_clipboard() + function_response = text_received + + elif "clipboard" in parameters["command"].lower(): + text_placed = self.place_text_on_clipboard(parameter) + function_response = text_placed + + else: + command = parameters["command"] + app_minimize = self.execute_ui_command(parameter, command) + if app_minimize: + function_response = f"Application {command}." + + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response diff --git a/templates/migration/1_5_0/skills/control_windows/requirements.txt b/templates/migration/1_5_0/skills/control_windows/requirements.txt new file mode 100644 index 00000000..b7b47cdc --- /dev/null +++ b/templates/migration/1_5_0/skills/control_windows/requirements.txt @@ -0,0 +1,2 @@ +pygetwindow==0.0.9 +clip-util==0.1.27 \ No newline at end of file diff --git a/templates/migration/1_5_0/skills/file_manager/default_config.yaml b/templates/migration/1_5_0/skills/file_manager/default_config.yaml new file mode 100644 index 00000000..736a91d8 --- /dev/null +++ b/templates/migration/1_5_0/skills/file_manager/default_config.yaml @@ -0,0 +1,49 @@ +name: FileManager +module: skills.file_manager.main +category: general +description: + en: Manage local files, save, load and create directories. Supports various text-based file formats. + de: Verwalte lokale Dateien, speichere, lade oder erstelle Verzeichnisse. Unterstützt verschiedene text-basierte Formate. +hint: + en: + de: +examples: + - question: + en: Save 'Hello, World!' to hello.txt. + de: Speichere 'Hallo, Welt!' in hello.txt. + answer: + en: (saves 'Hello, World!' to hello.txt in the default directory) + de: (speichert 'Hallo, Welt!' in hello.txt im Standardverzeichnis) + - question: + en: Load the content from notes.md. + de: Lade den Inhalt aus notes.md. + answer: + en: (loads the content of notes.md and reads it out loud) + de: (lädt den Inhalt von notes.md und liest ihn vor) + - question: + en: Create a directory named 'Projects'. + de: Erstelle ein Verzeichnis namens 'Projekte'. + answer: + en: (creates a directory named 'Projects' in the default directory) + de: (erstellt ein Verzeichnis namens 'Projekte' im Standardverzeichnis) +prompt: | + You can also save text to various file formats, load text from files, or create directories as specified by the user. + You support all plain text file formats. + When adding text to an existing file, you follow these rules: + (1) determine if it is appropriate to add a new line before the added text or ask the user if you do not know. + (2) only add content to an existing file if you are sure that is what the user wants. + (3) when adding content to a file, only add the specific additional content the user wants added, not a duplicate of all of the original content. + You can also aid the user in opening folders / directories in the user interface. +custom_properties: + - hint: The default directory for file operations. If left blank, will default to your WingmanAI config directory in a sub-directory called "files". + id: default_directory + name: Default directory + property_type: string + required: false + value: "" + - hint: Allow WingmanAI FileManager to overwrite existing files. CAUTION - ADVANCED USERS ONLY - Only activate this option if you have backed up existing files. 
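+    # Note: even with overwriting disabled, text can still be appended to an existing file when the user clearly asks to add to it (add_to_existing_file in main.py).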
+ id: allow_overwrite_existing + name: Allow overwrite existing files + property_type: boolean + required: true + value: false diff --git a/templates/migration/1_5_0/skills/file_manager/logo.png b/templates/migration/1_5_0/skills/file_manager/logo.png new file mode 100644 index 00000000..81ea6846 Binary files /dev/null and b/templates/migration/1_5_0/skills/file_manager/logo.png differ diff --git a/templates/migration/1_5_0/skills/file_manager/main.py b/templates/migration/1_5_0/skills/file_manager/main.py new file mode 100644 index 00000000..b06f6653 --- /dev/null +++ b/templates/migration/1_5_0/skills/file_manager/main.py @@ -0,0 +1,390 @@ +import os +import json +from typing import TYPE_CHECKING +from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError +from api.enums import LogType +from skills.skill_base import Skill +from services.file import get_writable_dir +from showinfm import show_in_file_manager +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + +DEFAULT_MAX_TEXT_SIZE = 15000 +SUPPORTED_FILE_EXTENSIONS = [ + "adoc", + "asc", + "bat", + "bib", + "cfg", + "conf", + "cpp", + "c", + "cs", + "css", + "csv", + "dockerfile", + "dot", + "env", + "fo", + "gd", + "gitconfig", + "gitignore", + "go", + "graphql", + "h", + "htaccess", + "html", + "http", + "ini", + "ipynb", + "java", + "json", + "jsonl", + "js", + "lua", + "log", + "m3u", + "map", + "md", + "pyd", + "plist", + "pl", + "po", + "ps1", + "pxd", + "py", + "resx", + "rpy", + "rs", + "rst", + "rtf", + "srt", + "sh", + "sql", + "svg", + "ts", + "tcl", + "tex", + "tmpl", + "toml", + "tpl", + "tsv", + "txt", + "vtt", + "wsdl", + "wsgi", + "xlf", + "xml", + "yaml", +] + + +class FileManager(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + self.allowed_file_extensions = SUPPORTED_FILE_EXTENSIONS + self.default_file_extension = "txt" + self.max_text_size = DEFAULT_MAX_TEXT_SIZE + self.default_directory = "" # Set in validate + self.allow_overwrite_existing = False + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + self.default_directory = self.retrieve_custom_property_value( + "default_directory", errors + ) + if not self.default_directory or self.default_directory == "": + self.default_directory = self.get_default_directory() + + self.allow_overwrite_existing = self.retrieve_custom_property_value( + "allow_overwrite_existing", errors + ) + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "load_text_from_file", + { + "type": "function", + "function": { + "name": "load_text_from_file", + "description": "Loads the content of a specified text file.", + "parameters": { + "type": "object", + "properties": { + "file_name": { + "type": "string", + "description": "The name of the file to load, including the file extension.", + }, + "directory_path": { + "type": "string", + "description": "The directory from where the file should be loaded. 
Defaults to the configured directory.", + }, + }, + "required": ["file_name"], + }, + }, + }, + ), + ( + "save_text_to_file", + { + "type": "function", + "function": { + "name": "save_text_to_file", + "description": "Saves the provided text to a file.", + "parameters": { + "type": "object", + "properties": { + "file_name": { + "type": "string", + "description": "The name of the file where the text should be saved, including the file extension.", + }, + "text_content": { + "type": "string", + "description": "The text content to save to the file.", + }, + "directory_path": { + "type": "string", + "description": "The directory where the file should be saved. Defaults to the configured directory.", + }, + "add_to_existing_file": { + "type": "boolean", + "description": "Boolean True/False indicator of whether the user wants to add text to an already existing file. Defaults to False unless user expresses clear intent to add to existing file.", + }, + }, + "required": [ + "file_name", + "text_content", + "add_to_existing_file", + ], + }, + }, + }, + ), + ( + "create_folder", + { + "type": "function", + "function": { + "name": "create_folder", + "description": "Creates a folder in the specified directory.", + "parameters": { + "type": "object", + "properties": { + "folder_name": { + "type": "string", + "description": "The name of the folder to create.", + }, + "directory_path": { + "type": "string", + "description": "The path of the directory where the folder should be created. Defaults to the configured directory.", + }, + }, + "required": ["folder_name"], + }, + }, + }, + ), + ( + "open_folder", + { + "type": "function", + "function": { + "name": "open_folder", + "description": "Opens a specified directory in the GUI.", + "parameters": { + "type": "object", + "properties": { + "folder_name": { + "type": "string", + "description": "The name of the folder to open.", + }, + "directory_path": { + "type": "string", + "description": "The path of the directory where the folder to open is located. Defaults to the configured directory.", + }, + }, + "required": ["folder_name"], + }, + }, + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "Operation not completed." + instant_response = "" + + if tool_name == "load_text_from_file": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing load_text_from_file with parameters: {parameters}", + color=LogType.INFO, + ) + file_name = parameters.get("file_name") + directory = parameters.get("directory_path", self.default_directory) + if directory == "": + directory = self.default_directory + if not file_name or file_name == "": + function_response = "File name not provided." + else: + file_extension = file_name.split(".")[-1] + if file_extension not in self.allowed_file_extensions: + function_response = f"Unsupported file extension: {file_extension}" + else: + file_path = os.path.join(directory, file_name) + try: + with open(file_path, "r", encoding="utf-8") as file: + file_content = file.read() + if len(file_content) > self.max_text_size: + function_response = ( + "File content exceeds the maximum allowed size." + ) + else: + function_response = f"File content loaded from {file_path}:\n{file_content}" + except FileNotFoundError: + function_response = ( + f"File '{file_name}' not found in '{directory}'." 
+ ) + except Exception as e: + function_response = ( + f"Failed to read file '{file_name}': {str(e)}" + ) + + elif tool_name == "save_text_to_file": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing save_text_to_file with parameters: {parameters}", + color=LogType.INFO, + ) + file_name = parameters.get("file_name") + text_content = parameters.get("text_content") + add_to_existing_file = parameters.get("add_to_existing_file", False) + directory = parameters.get("directory_path", self.default_directory) + if directory == "": + directory = self.default_directory + if not file_name or not text_content or file_name == "": + function_response = "File name or text content not provided." + else: + file_extension = file_name.split(".")[-1] + if file_extension not in self.allowed_file_extensions: + file_name += f".{self.default_file_extension}" + if len(text_content) > self.max_text_size: + function_response = "Text content exceeds the maximum allowed size." + else: + if file_extension == "json": + try: + json_content = json.loads(text_content) + text_content = json.dumps(json_content, indent=4) + except json.JSONDecodeError as e: + function_response = f"Invalid JSON content: {str(e)}" + return function_response, instant_response + os.makedirs(directory, exist_ok=True) + file_path = os.path.join(directory, file_name) + + # If file already exists, and user does not have overwrite option on, and LLM did not detect an intent to add to the existing file, stop + if ( + os.path.isfile(file_path) + and not self.allow_overwrite_existing + and not add_to_existing_file + ): + function_response = f"File '{file_name}' already exists at {directory} and overwrite is not allowed." + + # Otherwise, if file exists but LLM detected user wanted to add to existing file, do that. + elif os.path.isfile(file_path) and add_to_existing_file: + try: + with open(file_path, "a", encoding="utf-8") as file: + file.write(text_content) + function_response = ( + f"Text added to existing file at {file_path}." + ) + except Exception as e: + function_response = ( + f"Failed to append text to {file_path}: {str(e)}" + ) + + # We are either fine with completely overwriting the file or it does not exist already + else: + try: + with open(file_path, "w", encoding="utf-8") as file: + file.write(text_content) + function_response = f"Text saved to {file_path}." + except Exception as e: + function_response = ( + f"Failed to save text to {file_path}: {str(e)}" + ) + + elif tool_name == "create_folder": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing create_folder with parameters: {parameters}", + color=LogType.INFO, + ) + folder_name = parameters.get("folder_name") + directory_path = parameters.get("directory_path", self.default_directory) + if directory_path == "": + directory_path = self.default_directory + if not folder_name or folder_name == "": + function_response = "Folder name not provided." + else: + full_path = os.path.join(directory_path, folder_name) + try: + os.makedirs(full_path, exist_ok=True) + function_response = ( + f"Folder '{folder_name}' created at '{directory_path}'." 
+ ) + except Exception as e: + function_response = ( + f"Failed to create folder '{folder_name}': {str(e)}" + ) + + elif tool_name == "open_folder": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing open_folder with parameters: {parameters}", + color=LogType.INFO, + ) + folder_name = parameters.get("folder_name") + directory_path = parameters.get("directory_path", self.default_directory) + if directory_path == "": + directory_path = self.default_directory + if not folder_name or folder_name == "": + function_response = "Folder name not provided." + else: + full_path = os.path.join(directory_path, folder_name) + try: + show_in_file_manager(full_path) + function_response = ( + f"Folder '{folder_name}' opened in '{directory_path}'." + ) + except Exception as e: + function_response = ( + f"Failed to open folder '{folder_name}': {str(e)}" + ) + + if self.settings.debug_mode: + await self.printr.print_async( + f"Finished calling {tool_name} tool and returned function response: {function_response}", + color=LogType.INFO, + ) + return function_response, instant_response + + def get_default_directory(self) -> str: + return get_writable_dir("files") diff --git a/templates/migration/1_5_0/skills/nms_assistant/default_config.yaml b/templates/migration/1_5_0/skills/nms_assistant/default_config.yaml new file mode 100644 index 00000000..3504dc60 --- /dev/null +++ b/templates/migration/1_5_0/skills/nms_assistant/default_config.yaml @@ -0,0 +1,19 @@ +name: NMSAssistant +module: skills.nms_assistant.main +category: no_mans_sky +description: + en: Fetch information about No Man's Sky items, elements, crafting, cooking, expeditions, community missions, game news, and patch notes. Powered by NMS Assistant API. + de: Rufe Informationen über No Man's Sky-Artikel, Elemente, Handwerk, Kochen, Expeditionen, Gemeinschaftsmissionen, Spielemeldungen und Patch-Notizen ab. Unterstützt von NMS Assistant API. +hint: + en: Try asking about an item generally first so the skill retrieves enough identifying information, before asking more detailed crafting or cooking information about it. + de: Versuch erstmal allgemein nach einem Gegenstand zu fragen, damit die Fähigkeit genug identifizierende Informationen erhält, bevor du nach detaillierten Handwerks- oder Kochinformationen fragst. +examples: + - question: + en: What are the latest patch notes? + de: Was sind die neuesten Patch-Notes? + answer: + en: (fetches and displays the latest patch notes from the game) + de: (ruft die neuesten Patch-Notes aus dem Spiel ab und zeigt sie an) +prompt: | + You are an assistant for No Man's Sky players. You can fetch information about items, elements, crafting, cooking, expeditions, community missions, and game news. + Use the relevant APIs to provide detailed information requested by the user. 
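+# All item, recipe, and news data is fetched live from the NMS Assistant API at https://api.nmsassistant.com (see API_BASE_URL in main.py).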
diff --git a/templates/migration/1_5_0/skills/nms_assistant/logo.png b/templates/migration/1_5_0/skills/nms_assistant/logo.png new file mode 100644 index 00000000..36be7aa4 Binary files /dev/null and b/templates/migration/1_5_0/skills/nms_assistant/logo.png differ diff --git a/templates/migration/1_5_0/skills/nms_assistant/main.py b/templates/migration/1_5_0/skills/nms_assistant/main.py new file mode 100644 index 00000000..aba51977 --- /dev/null +++ b/templates/migration/1_5_0/skills/nms_assistant/main.py @@ -0,0 +1,402 @@ +import requests +import json +from typing import TYPE_CHECKING +from api.enums import LogType +from api.interface import ( + SettingsConfig, + SkillConfig, + WingmanInitializationError, +) +from skills.skill_base import Skill +import asyncio + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + +API_BASE_URL = "https://api.nmsassistant.com" + +class NMSAssistant(Skill): + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "get_release_info", + { + "type": "function", + "function": { + "name": "get_release_info", + "description": "Fetch release information from No Man's Sky website.", + }, + }, + ), + ( + "get_news", + { + "type": "function", + "function": { + "name": "get_news", + "description": "Fetch news from No Man's Sky website.", + }, + }, + ), + ( + "get_community_mission_info", + { + "type": "function", + "function": { + "name": "get_community_mission_info", + "description": "Fetch current community mission information.", + }, + }, + ), + ( + "get_latest_expedition_info", + { + "type": "function", + "function": { + "name": "get_latest_expedition_info", + "description": "Fetch latest expedition information.", + }, + }, + ), + ( + "get_item_info_by_name", + { + "type": "function", + "function": { + "name": "get_item_info_by_name", + "description": "Fetch game item details based on name and language.", + "parameters": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "The name of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["name", "languageCode"], + }, + }, + }, + ), + ( + "get_extra_item_info", + { + "type": "function", + "function": { + "name": "get_extra_item_info", + "description": "Fetch extra item details using appId.", + "parameters": { + "type": "object", + "properties": { + "appId": { + "type": "string", + "description": "The appId of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["appId", "languageCode"], + }, + }, + }, + ), + ( + "get_refiner_recipes_by_input", + { + "type": "function", + "function": { + "name": "get_refiner_recipes_by_input", + "description": "Fetch refiner recipes by input item using appId.", + "parameters": { + "type": "object", + "properties": { + "appId": { + "type": "string", + "description": "The appId of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["appId", "languageCode"], + }, + }, + }, + ), + ( + "get_refiner_recipes_by_output", + { + "type": "function", + 
"function": { + "name": "get_refiner_recipes_by_output", + "description": "Fetch refiner recipes by output item using appId.", + "parameters": { + "type": "object", + "properties": { + "appId": { + "type": "string", + "description": "The appId of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["appId", "languageCode"], + }, + }, + }, + ), + ( + "get_cooking_recipes_by_input", + { + "type": "function", + "function": { + "name": "get_cooking_recipes_by_input", + "description": "Fetch cooking recipes by input item using appId.", + "parameters": { + "type": "object", + "properties": { + "appId": { + "type": "string", + "description": "The appId of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["appId", "languageCode"], + }, + }, + }, + ), + ( + "get_cooking_recipes_by_output", + { + "type": "function", + "function": { + "name": "get_cooking_recipes_by_output", + "description": "Fetch cooking recipes by output item using appId.", + "parameters": { + "type": "object", + "properties": { + "appId": { + "type": "string", + "description": "The appId of the item.", + }, + "languageCode": { + "type": "string", + "description": "The language code (e.g., 'en' for English)", + }, + }, + "required": ["appId", "languageCode"], + }, + }, + }, + ), + ] + return tools + + async def request_api(self, endpoint: str) -> dict: + response = requests.get(f"{API_BASE_URL}{endpoint}") + if response.status_code == 200: + return response.json() + else: + if self.settings.debug_mode: + await self.printr.print_async(f"API request failed to {API_BASE_URL}{endpoint}, status code: {response.status_code}.", color=LogType.INFO) + return {} + + async def parse_nms_assistant_api_response(self, api_response) -> dict: + def extract_app_ids(data): + app_ids = [] + # Parse the JSON string into Python objects + #data = json.loads(json_data) + for entry in data: + # Extract the appId from the main entry + app_ids.append(entry['appId']) + # Extract the appIds from the inputs + for input_item in entry['inputs']: + app_ids.append(input_item['appId']) + # Extract the appId from the output + app_ids.append(entry['output']['appId']) + return app_ids + # Extract appIds + app_ids = extract_app_ids(api_response) + async def fetch_item_name(app_id: str) -> str: + data = await self.request_api(f"/ItemInfo/{app_id}/en") + return data.get('name', 'Unknown') + # Get names for each appId as a key for the LLM + tasks = [fetch_item_name(item) for item in app_ids] + results = await asyncio.gather(*tasks) + return {item: name for item, name in zip(app_ids, results)} + + async def check_if_appId_is_valid(self, appId, languageCode) -> bool: + if self.settings.debug_mode: + await self.printr.print_async(f"Checking if appID {appId} is valid before proceeding.", color=LogType.INFO) + check_response = await self.request_api(f"/ItemInfo/{appId}/{languageCode}") + if check_response and check_response != {}: + return True + else: + return False + + async def is_waiting_response_needed(self, tool_name: str) -> bool: + return True + + async def execute_tool(self, tool_name: str, parameters: dict[str, any]) -> tuple[str, str]: + function_response = "Operation failed." 
+ instant_response = "" + + if tool_name == "get_release_info": + if self.settings.debug_mode: + self.start_execution_benchmark() + data = await self.request_api("/HelloGames/Release") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_news": + if self.settings.debug_mode: + self.start_execution_benchmark() + data = await self.request_api("/HelloGames/News") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_community_mission_info": + if self.settings.debug_mode: + self.start_execution_benchmark() + data = await self.request_api("/HelloGames/CommunityMission") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_latest_expedition_info": + if self.settings.debug_mode: + self.start_execution_benchmark() + data = await self.request_api("/HelloGames/Expedition") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_item_info_by_name": + if self.settings.debug_mode: + self.start_execution_benchmark() + name = parameters.get("name") + language_code = parameters.get("languageCode") + data = await self.request_api(f"/ItemInfo/Name/{name}/{language_code}") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_extra_item_info": + if self.settings.debug_mode: + self.start_execution_benchmark() + app_id = parameters.get("appId") + language_code = parameters.get("languageCode") + if app_id and language_code: + appId_found = await self.check_if_appId_is_valid(app_id, language_code) + if not appId_found: + # Assume maybe the appId is actually the plain text item name, so get appId from that + name_check = await self.request_api(f"/ItemInfo/Name/{app_id}/{language_code}") + app_id = name_check.get('appId') if name_check else app_id + data = await self.request_api(f"/ItemInfo/ExtraProperties/{app_id}/{language_code}") + function_response = data if data else function_response + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_refiner_recipes_by_input": + if self.settings.debug_mode: + self.start_execution_benchmark() + app_id = parameters.get("appId") + language_code = parameters.get("languageCode") + if app_id and language_code: + appId_found = await self.check_if_appId_is_valid(app_id, language_code) + if not appId_found: + # Assume maybe the appId is actually the plain text item name, so get appId from that + name_check = await self.request_api(f"/ItemInfo/Name/{app_id}/{language_code}") + app_id = name_check.get('appId') if name_check else app_id + data = await self.request_api(f"/ItemInfo/RefinerByInput/{app_id}/{language_code}") + if data: + parsed_data = await self.parse_nms_assistant_api_response(data) + function_response = f"{data}; key for item names used in above data: {parsed_data}" + if self.settings.debug_mode: + await self.print_execution_time() + + elif tool_name == "get_refiner_recipes_by_output": + if self.settings.debug_mode: + self.start_execution_benchmark() + app_id = parameters.get("appId") + language_code = parameters.get("languageCode") + if app_id and language_code: + appId_found = await self.check_if_appId_is_valid(app_id, language_code) + if not 
appId_found:
+                    # Assume the appId might actually be the plain-text item name, so resolve the appId from it
+                    name_check = await self.request_api(f"/ItemInfo/Name/{app_id}/{language_code}")
+                    app_id = name_check.get('appId') if name_check else app_id
+                data = await self.request_api(f"/ItemInfo/RefinerByOutput/{app_id}/{language_code}")
+                if data:
+                    parsed_data = await self.parse_nms_assistant_api_response(data)
+                    function_response = f"{data}; key for item names used in above data: {parsed_data}"
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        elif tool_name == "get_cooking_recipes_by_input":
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+            app_id = parameters.get("appId")
+            language_code = parameters.get("languageCode")
+            if app_id and language_code:
+                appId_found = await self.check_if_appId_is_valid(app_id, language_code)
+                if not appId_found:
+                    # Assume the appId might actually be the plain-text item name, so resolve the appId from it
+                    name_check = await self.request_api(f"/ItemInfo/Name/{app_id}/{language_code}")
+                    app_id = name_check.get('appId') if name_check else app_id
+                data = await self.request_api(f"/ItemInfo/CookingByInput/{app_id}/{language_code}")
+                if data:
+                    parsed_data = await self.parse_nms_assistant_api_response(data)
+                    function_response = f"{data}; key for item names used in above data: {parsed_data}"
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        elif tool_name == "get_cooking_recipes_by_output":
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+            app_id = parameters.get("appId")
+            language_code = parameters.get("languageCode")
+            if app_id and language_code:
+                appId_found = await self.check_if_appId_is_valid(app_id, language_code)
+                if not appId_found:
+                    # Assume the appId might actually be the plain-text item name, so resolve the appId from it
+                    name_check = await self.request_api(f"/ItemInfo/Name/{app_id}/{language_code}")
+                    app_id = name_check.get('appId') if name_check else app_id
+                data = await self.request_api(f"/ItemInfo/CookingByOutput/{app_id}/{language_code}")
+                if data:
+                    parsed_data = await self.parse_nms_assistant_api_response(data)
+                    function_response = f"{data}; key for item names used in above data: {parsed_data}"
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(f"Executed {tool_name} with parameters {parameters}. Result: {function_response}", color=LogType.INFO)
+
+        return function_response, instant_response
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/quick_commands/default_config.yaml b/templates/migration/1_5_0/skills/quick_commands/default_config.yaml
new file mode 100644
index 00000000..8cf1a777
--- /dev/null
+++ b/templates/migration/1_5_0/skills/quick_commands/default_config.yaml
@@ -0,0 +1,13 @@
+name: QuickCommands
+module: skills.quick_commands.main
+category: general
+description:
+  en: Automatically learns instant activation phrases for commands you use regularly, speeding up their execution time.
+  de: Lernt automatisch Aktivierungsphrasen für Sofort-Befehle, die du regelmäßig verwendest und beschleunigt so die Ausführungszeit.
+custom_properties:
+  - id: quick_commands_learning_rule_count
+    name: Hits until learn
+    hint: The number of times a command must be triggered by the same phrase before the phrase is learned.
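+    # A phrase that triggers different commands at different times is considered ambiguous and gets blacklisted instead of learned (see main.py).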
+ value: 3 + required: true + property_type: number diff --git a/templates/migration/1_5_0/skills/quick_commands/logo.png b/templates/migration/1_5_0/skills/quick_commands/logo.png new file mode 100644 index 00000000..bf8c9aaf Binary files /dev/null and b/templates/migration/1_5_0/skills/quick_commands/logo.png differ diff --git a/templates/migration/1_5_0/skills/quick_commands/main.py b/templates/migration/1_5_0/skills/quick_commands/main.py new file mode 100644 index 00000000..9e6592b3 --- /dev/null +++ b/templates/migration/1_5_0/skills/quick_commands/main.py @@ -0,0 +1,289 @@ +from os import path +import json +import datetime +from typing import TYPE_CHECKING +from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError +from api.enums import LogType +from services.file import get_writable_dir +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class QuickCommands(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + # get file paths + self.data_path = get_writable_dir(path.join("skills", "quick_commands", "data")) + self.file_ipl = path.join(self.data_path, "instant_phrase_learning.json") + + # learning data + self.learning_blacklist = [] + self.learning_data = {} + self.learning_learned = {} + + # rules + self.rule_count = 3 + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + self.rule_count = self.retrieve_custom_property_value( + "quick_commands_learning_rule_count", errors + ) + if not self.rule_count or self.rule_count < 0: + self.rule_count = 3 + + self.threaded_execution(self._init_skill) + return errors + + async def _init_skill(self) -> None: + """Initialize the skill.""" + await self._load_learning_data() + await self._cleanup_learning_data() + for phrase, commands in self.learning_learned.items(): + await self._add_instant_activation_phrase(phrase, commands) + + async def _add_instant_activation_phrase( + self, phrase: str, commands: list[str] + ) -> None: + """Add an instant activation phrase.""" + for command in commands: + command = self.wingman.get_command(command) + if not command.instant_activation: + command.instant_activation = [] + + if phrase not in command.instant_activation: + command.instant_activation.append(phrase) + + async def on_add_assistant_message(self, message: str, tool_calls: list) -> None: + """Hook to start learning process.""" + if tool_calls: + self.threaded_execution( + self._process_messages, tool_calls, self.wingman.messages[-1] + ) + + async def _process_messages(self, tool_calls, last_message) -> None: + """Process messages to learn phrases and commands.""" + phrase = "" + command_names = [] + + for tool_call in tool_calls: + if tool_call.function.name == "execute_command": + if isinstance(tool_call.function.arguments, dict): + arguments = tool_call.function.arguments + else: + try: + arguments = json.loads(tool_call.function.arguments) + except json.JSONDecodeError: + return + if "command_name" in arguments: + command_names.append(arguments["command_name"]) + else: + return + else: + return + + role = ( + last_message.role + if hasattr(last_message, "role") + else last_message.get("role", False) + ) + if role != "user": + return + phrase = ( + last_message.content + if hasattr(last_message, "content") + else last_message.get("content", False) + ) + + if not 
phrase or not command_names:
+            return
+
+        await self._learn_phrase(phrase.lower(), command_names)
+
+    async def _cleanup_learning_data(self) -> None:
+        """Cleanup learning data. (Removes entries for commands that are no longer available.)"""
+        pops = []
+        for phrase, commands in self.learning_learned.items():
+            for command in commands:
+                if not self.wingman.get_command(command):
+                    pops.append(phrase)
+        if pops:
+            # deduplicate, as a phrase is collected once per missing command
+            for phrase in set(pops):
+                self.learning_learned.pop(phrase)
+
+        pops = []
+        finished = []
+        for phrase in self.learning_data.keys():
+            commands = self.learning_data[phrase]["commands"]
+            for command in commands:
+                if not self.wingman.get_command(command):
+                    pops.append(phrase)
+                elif self.learning_data[phrase]["count"] >= self.rule_count:
+                    finished.append(phrase)
+
+        if pops:
+            for phrase in set(pops):
+                self.learning_data.pop(phrase)
+        if finished:
+            for phrase in set(finished):
+                # a phrase may already have been removed above if one of its commands is gone
+                if phrase in self.learning_data:
+                    await self._finish_learning(phrase)
+
+        await self._save_learning_data()
+
+    async def _learn_phrase(self, phrase: str, command_names: list[str]) -> None:
+        """Learn a phrase and the tool calls that should be executed for it."""
+        # load the learning data
+        await self._load_learning_data()
+
+        # check if the phrase is on the blacklist
+        if phrase in self.learning_blacklist:
+            return
+
+        # get and check the command
+        for command_name in command_names:
+            command = self.wingman.get_command(command_name)
+            if not command:
+                # AI probably hallucinated
+                return
+
+        # add / increase count of the phrase
+        if phrase in self.learning_data:
+            if len(self.learning_data[phrase]["commands"]) != len(command_names):
+                # phrase is ambiguous, add to blacklist
+                await self._add_to_blacklist(phrase)
+                return
+
+            for command_name in command_names:
+                if command_name not in self.learning_data[phrase]["commands"]:
+                    # phrase is ambiguous, add to blacklist
+                    await self._add_to_blacklist(phrase)
+                    return
+
+            self.learning_data[phrase]["count"] += 1
+        else:
+            self.learning_data[phrase] = {"commands": command_names, "count": 1}
+
+        if self.learning_data[phrase]["count"] >= self.rule_count:
+            await self._finish_learning(phrase)
+
+        # save the learning data
+        await self._save_learning_data()
+
+    async def _finish_learning(self, phrase: str) -> None:
+        """Finish learning a phrase.
+        An LLM call is made to check whether the phrase makes sense for the commands.
+        On success, the phrase is added to the learned phrases."""
+
+        commands = self.learning_data[phrase]["commands"]
+
+        messages = [
+            {
+                "role": "system",
+                "content": """
+                    I'll give you one or multiple commands and a phrase. You have to decide if the commands fit the phrase or not.
+                    Return 'yes' if the commands fit the phrase and 'no' if they don't.
+
+                    Samples:
+                    - Phrase: "What is the weather like?" Command: "checkWeather" -> yes
+                    - Phrase: "What is the weather like?" Command: "playMusic" -> no
+                    - Phrase: "Please do that." Command: "enableShields" -> no
+                    - Phrase: "Yes, please." Command: "enableShields" -> no
+                    - Phrase: "We are being attacked by rockets." Command: "throwCountermeasures" -> yes
+                    - Phrase: "It's way too dark in here."
Command: "toggleLight" -> yes + """, + }, + { + "role": "user", + "content": f"Phrase: '{phrase}' Commands: '{', '.join(commands)}'", + }, + ] + completion = await self.llm_call(messages) + answer = completion.choices[0].message.content or "" + if answer.lower() == "yes": + await self.printr.print_async( + f"Instant activation phrase for '{', '.join(commands)}' learned.", + color=LogType.INFO, + ) + self.learning_learned[phrase] = commands + self.learning_data.pop(phrase) + await self._add_instant_activation_phrase(phrase, commands) + else: + await self._add_to_blacklist(phrase) + + await self._save_learning_data() + + async def _add_to_blacklist(self, phrase: str) -> None: + """Add a phrase to the blacklist.""" + await self.printr.print_async( + f"Added phrase to blacklist: '{phrase if len(phrase) <= 25 else phrase[:25]+'...'}'", + color=LogType.INFO, + ) + self.learning_blacklist.append(phrase) + self.learning_data.pop(phrase) + await self._save_learning_data() + + async def _is_phrase_on_blacklist(self, phrase: str) -> bool: + """Check if a phrase is on the blacklist.""" + if phrase in self.learning_blacklist: + return True + return False + + async def _load_learning_data(self): + """Load the learning data file.""" + + # create the file if it does not exist + if not path.exists(self.file_ipl): + return + + # load the learning data + with open(self.file_ipl, "r", encoding="utf-8") as file: + try: + data = json.load(file) + except json.JSONDecodeError: + await self.printr.print_async( + "Could not read learning data file. Resetting learning data..", + color=LogType.ERROR, + ) + # if file wasnt empty, save it as backup + if file.read(): + timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S") + with open( + self.file_ipl + f"_{timestamp}.backup", "w", encoding="utf-8" + ) as backup_file: + backup_file.write(file.read()) + # reset learning data + with open(self.file_ipl, "w", encoding="utf-8") as file: + file.write("{}") + data = {} + + self.learning_data = ( + data["learning_data"] if "learning_data" in data else {} + ) + self.learning_blacklist = ( + data["learning_blacklist"] if "learning_blacklist" in data else [] + ) + self.learning_learned = ( + data["learning_learned"] if "learning_learned" in data else {} + ) + + async def _save_learning_data(self) -> None: + """Save the learning data.""" + + learning_data = { + "learning_data": self.learning_data, + "learning_blacklist": self.learning_blacklist, + "learning_learned": self.learning_learned, + } + + with open(self.file_ipl, "w", encoding="utf-8") as file: + json.dump(learning_data, file, indent=4) diff --git a/templates/migration/1_5_0/skills/radio_chatter/default_config.yaml b/templates/migration/1_5_0/skills/radio_chatter/default_config.yaml new file mode 100644 index 00000000..a73869ee --- /dev/null +++ b/templates/migration/1_5_0/skills/radio_chatter/default_config.yaml @@ -0,0 +1,120 @@ +name: RadioChatter +module: skills.radio_chatter.main +category: general +description: + en: Randomly playback radio chatter over time. Customize the participants and their voices. + de: Spielt zufällige Funkgespräche ab. Passe die Teilnehmer und ihre Stimmen an. +examples: + - question: + en: What is the status of the radio? + de: Was ist der Status des Funkgeräts? + answer: + en: The radio is currently turned off. + de: Das Funkgerät ist derzeit ausgeschaltet. + - question: + en: Please turn the radio on. + de: Bitte schalte das Funkgerät ein. + answer: + en: The radio is now turned on. + de: Das Funkgerät wurde eingeschaltet. 
+custom_properties:
+  - id: prompt
+    name: Prompt for message generation
+    hint: A prompt used on voice change to generate a new personality. Leave empty to disable.
+    required: false
+    value: "Generate a dialog between random pilots in the Star Citizen universe. Feel free to throw in some random details. Keep in mind that Port Olisar no longer exists."
+    property_type: textarea
+  - id: voices
+    name: Available voices
+    hint: The voices used in the radio chatter.
+    value: []
+    required: false
+    property_type: voice_selection
+    options:
+      - label: "multiple"
+        value: true
+  - id: interval_min
+    name: Min interval
+    hint: The minimum time in seconds between radio chatter events. This is also the time until the first chatter event occurs.
+    value: 30
+    required: true
+    property_type: number
+  - id: interval_max
+    name: Max interval
+    hint: The maximum time in seconds between radio chatter events.
+    value: 600
+    required: true
+    property_type: number
+  - id: messages_min
+    name: Min messages
+    hint: The minimum number of messages to play per chatter event.
+    value: 1
+    required: true
+    property_type: number
+  - id: messages_max
+    name: Max messages
+    hint: The maximum number of messages to play per chatter event.
+    value: 5
+    required: true
+    property_type: number
+  - id: participants_min
+    name: Min participants
+    hint: The minimum number of participants in the chatter.
+    value: 2
+    required: true
+    property_type: number
+  - id: participants_max
+    name: Max participants
+    hint: The maximum number of participants in the chatter.
+    value: 3
+    required: true
+    property_type: number
+  - id: force_radio_sound
+    name: Force radio effect
+    hint: Overrides the Wingman's sound effects for radio chatter with the selected radio effects. Needs at least one valid value in "Available radio effects".
+    value: True
+    required: false
+    property_type: boolean
+  - id: radio_sounds
+    name: Available radio effects
+    hint: A comma-separated list of radio effects that are randomly used when "Force radio effect" is enabled. Possible values are "low", "medium" and "high".
+    value: "low, medium"
+    required: false
+    property_type: string
+  - id: auto_start
+    name: Auto start
+    hint: Automatically start the radio chatter with your Wingman.
+    value: False
+    required: false
+    property_type: boolean
+  - id: volume
+    name: Volume
+    hint: The volume (relative to the Wingman's volume) of the radio chatter. Must be between 0.0 (silent) and 1.0 (same volume as the Wingman).
+    value: 0.5
+    required: false
+    property_type: slider
+    options:
+      - label: "min"
+        value: 0.0
+      - label: "max"
+        value: 1.0
+      - label: "step"
+        value: 0.1
+  - id: print_chatter
+    name: Print chatter
+    hint: Print the generated chatter to the message overview.
+    value: True
+    required: false
+    property_type: boolean
+  - id: radio_knowledge
+    name: Radio knowledge
+    hint: If enabled, the radio chatter messages will be added to the Wingman conversation. This way you can talk with your Wingman about the radio chatter.
+    value: False
+    required: false
+    property_type: boolean
+  - id: use_beeps
+    name: Use beeps
+    hint: Use beeps to indicate the start and end of radio chatter messages.
+    value: True
+    required: false
+    property_type: boolean
diff --git a/templates/migration/1_5_0/skills/radio_chatter/logo.png b/templates/migration/1_5_0/skills/radio_chatter/logo.png
new file mode 100644
index 00000000..d2284125
Binary files /dev/null and b/templates/migration/1_5_0/skills/radio_chatter/logo.png differ
diff --git a/templates/migration/1_5_0/skills/radio_chatter/main.py b/templates/migration/1_5_0/skills/radio_chatter/main.py
new file mode 100644
index 00000000..71994968
--- /dev/null
+++ b/templates/migration/1_5_0/skills/radio_chatter/main.py
@@ -0,0 +1,540 @@
+import time
+import copy
+from os import path
+from random import randrange
+from typing import TYPE_CHECKING
+from api.interface import (
+    SettingsConfig,
+    SkillConfig,
+    VoiceSelection,
+    WingmanInitializationError,
+)
+from api.enums import (
+    LogType,
+    WingmanInitializationErrorType,
+    TtsProvider,
+    WingmanProTtsProvider,
+    SoundEffect,
+)
+from services.file import get_writable_dir
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+class RadioChatter(Skill):
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+        self.file_path = get_writable_dir(path.join("skills", "radio_chatter", "data"))
+
+        self.last_message = None
+        self.radio_status = False
+        self.loaded = False
+
+        self.prompt = None
+        # start with an empty list; assigning the bare type (list[VoiceSelection]) would break the length checks in validate()
+        self.voices: list[VoiceSelection] = []
+        self.interval_min = None
+        self.interval_max = None
+        self.messages_min = None
+        self.messages_max = None
+        self.participants_min = None
+        self.participants_max = None
+        self.force_radio_sound = False
+        self.radio_sounds = []
+        self.use_beeps = False
+        self.auto_start = False
+        self.volume = 1.0
+        self.print_chatter = False
+        self.radio_knowledge = False
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+
+        self.prompt = self.retrieve_custom_property_value("prompt", errors)
+
+        # prepare voices
+        voices: list[VoiceSelection] = self.retrieve_custom_property_value(
+            "voices", errors
+        )
+        if voices:
+            # we have to initiate all providers here;
+            # we no longer check voice availability or validate the structure
+
+            initiated_providers = []
+            initiate_provider_error = False
+
+            for voice in voices:
+                voice_provider = voice.provider
+                if voice_provider not in initiated_providers:
+                    initiated_providers.append(voice_provider)
+
+                    # initiate provider
+                    if voice_provider == TtsProvider.OPENAI and not self.wingman.openai:
+                        await self.wingman.validate_and_set_openai(errors)
+                        if len(errors) > 0:
+                            initiate_provider_error = True
+                    elif (
+                        voice_provider == TtsProvider.AZURE
+                        and not self.wingman.openai_azure
+                    ):
+                        await self.wingman.validate_and_set_azure(errors)
+                        if len(errors) > 0:
+                            initiate_provider_error = True
+                    elif (
+                        voice_provider == TtsProvider.ELEVENLABS
+                        and not self.wingman.elevenlabs
+                    ):
+                        await self.wingman.validate_and_set_elevenlabs(errors)
+                        if len(errors) > 0:
+                            initiate_provider_error = True
+                    elif (
+                        voice_provider == TtsProvider.WINGMAN_PRO
+                        and not self.wingman.wingman_pro
+                    ):
+                        await self.wingman.validate_and_set_wingman_pro()
+
+            if not initiate_provider_error:
+                self.voices = voices
+
+        self.interval_min = self.retrieve_custom_property_value("interval_min", errors)
+        if self.interval_min is not None and self.interval_min < 1:
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'interval_min'. Expected a number greater than or equal to 1.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.interval_max = self.retrieve_custom_property_value("interval_max", errors)
+        if (self.interval_max is not None and self.interval_max < 1) or (
+            self.interval_min is not None and self.interval_max < self.interval_min
+        ):
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'interval_max'. Expected a number greater than or equal to 'interval_min'.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.messages_min = self.retrieve_custom_property_value("messages_min", errors)
+        if self.messages_min is not None and self.messages_min < 1:
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'messages_min'. Expected a number greater than or equal to 1.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.messages_max = self.retrieve_custom_property_value("messages_max", errors)
+        if (self.messages_max is not None and self.messages_max < 1) or (
+            self.messages_min is not None and self.messages_max < self.messages_min
+        ):
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'messages_max'. Expected a number greater than or equal to 'messages_min'.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.participants_min = self.retrieve_custom_property_value(
+            "participants_min", errors
+        )
+        if self.participants_min is not None and self.participants_min < 1:
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'participants_min'. Expected a number greater than or equal to 1.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.participants_max = self.retrieve_custom_property_value(
+            "participants_max", errors
+        )
+        if (self.participants_max is not None and self.participants_max < 1) or (
+            self.participants_min is not None
+            and self.participants_max < self.participants_min
+        ):
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'participants_max'. Expected a number greater than or equal to 'participants_min'.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+
+        if not self.voices or self.participants_max > len(self.voices):
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Not enough voices available for the configured number of max participants.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+
+        self.force_radio_sound = self.retrieve_custom_property_value(
+            "force_radio_sound", errors
+        )
+
+        self.auto_start = self.retrieve_custom_property_value("auto_start", errors)
+
+        # do not use "or" here: it would silently turn a configured volume of 0.0 into 0.5
+        volume = self.retrieve_custom_property_value("volume", errors)
+        self.volume = 0.5 if volume is None else volume
+        if self.volume < 0 or self.volume > 1:
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.wingman.name,
+                    message="Invalid value for 'volume'. 
Expected a number between 0 and 1.",
+                    error_type=WingmanInitializationErrorType.INVALID_CONFIG,
+                )
+            )
+        self.print_chatter = self.retrieve_custom_property_value(
+            "print_chatter", errors
+        )
+        self.radio_knowledge = self.retrieve_custom_property_value(
+            "radio_knowledge", errors
+        )
+        radio_sounds = self.retrieve_custom_property_value("radio_sounds", errors)
+        # split by comma
+        if radio_sounds:
+            radio_sounds = radio_sounds.lower().replace(" ", "").split(",")
+            if "low" in radio_sounds:
+                self.radio_sounds.append(SoundEffect.LOW_QUALITY_RADIO)
+            if "medium" in radio_sounds:
+                self.radio_sounds.append(SoundEffect.MEDIUM_QUALITY_RADIO)
+            if "high" in radio_sounds:
+                self.radio_sounds.append(SoundEffect.HIGH_END_RADIO)
+        if not self.radio_sounds:
+            self.force_radio_sound = False
+        self.use_beeps = self.retrieve_custom_property_value("use_beeps", errors)
+
+        return errors
+
+    async def prepare(self) -> None:
+        self.loaded = True
+        if self.auto_start:
+            self.threaded_execution(self._init_chatter)
+
+    async def unload(self) -> None:
+        self.loaded = False
+        self.radio_status = False
+
+    def randrange(self, start, stop=None):
+        # randrange raises an error for an empty range, so return start directly in that case
+        if start == stop:
+            return start
+        return randrange(start, stop)
+
+    def get_tools(self) -> list[tuple[str, dict]]:
+        tools = [
+            (
+                "turn_on_radio",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "turn_on_radio",
+                        "description": "Turn the radio on to pick up some chatter on open frequencies.",
+                    },
+                },
+            ),
+            (
+                "turn_off_radio",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "turn_off_radio",
+                        "description": "Turn the radio off to no longer pick up chatter on open frequencies.",
+                    },
+                },
+            ),
+            (
+                "radio_status",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "radio_status",
+                        "description": "Get the status (on/off) of the radio.",
+                    },
+                },
+            ),
+        ]
+        return tools
+
+    async def execute_tool(
+        self, tool_name: str, parameters: dict[str, any]
+    ) -> tuple[str, str]:
+        function_response = ""
+        instant_response = ""
+
+        if tool_name in ["turn_on_radio", "turn_off_radio", "radio_status"]:
+            if self.settings.debug_mode:
+                self.start_execution_benchmark()
+
+            if tool_name == "turn_on_radio":
+                if self.radio_status:
+                    function_response = "Radio is already on."
+                else:
+                    self.threaded_execution(self._init_chatter)
+                    function_response = "Radio is now on."
+            elif tool_name == "turn_off_radio":
+                if self.radio_status:
+                    self.radio_status = False
+                    function_response = "Radio is now off."
+                else:
+                    function_response = "Radio is already off."
+            elif tool_name == "radio_status":
+                if self.radio_status:
+                    function_response = "Radio is on."
+                else:
+                    function_response = "Radio is off."
+ + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response + + async def _init_chatter(self) -> None: + """Start the radio chatter.""" + + self.radio_status = True + time.sleep(max(5, self.interval_min)) # sleep for min 5s else min interval + + while self.is_active(): + await self._generate_chatter() + interval = self.randrange(self.interval_min, self.interval_max) + time.sleep(interval) + + def is_active(self) -> bool: + return self.radio_status and self.loaded + + async def _generate_chatter(self): + if not self.is_active(): + return + + count_message = self.randrange(self.messages_min, self.messages_max) + count_participants = self.randrange( + self.participants_min, self.participants_max + ) + + messages = [ + { + "role": "system", + "content": f""" + ## Must follow these rules + - There are {count_participants} participant(s) in the conversation/monolog + - The conversation/monolog must contain exactly {count_message} messages between the participants or in the monolog + - Each new line in your answer represents a new message + - Use matching call signs for the participants + + ## Sample response + Name1: Message Content + Name2: Message Content + Name3: Message Content + Name2: Message Content + ... + """, + }, + { + "role": "user", + "content": str(self.prompt), + }, + ] + completion = await self.llm_call(messages) + messages = ( + completion.choices[0].message.content + if completion and completion.choices + else "" + ) + + if not messages: + return + + clean_messages = [] + voice_participant_mapping = {} + for message in messages.split("\n"): + if not message: + continue + + # get name before first ":" + name = message.split(":")[0].strip() + text = message.split(":", 1)[1].strip() + + if name not in voice_participant_mapping: + voice_participant_mapping[name] = None + + clean_messages.append((name, text)) + + original_voice_setting = await self._get_original_voice_setting() + elevenlabs_streaming = self.wingman.config.elevenlabs.output_streaming + original_sound_config = copy.deepcopy(self.wingman.config.sound) + + # copy for volume and effects + custom_sound_config = copy.deepcopy(self.wingman.config.sound) + custom_sound_config.play_beep = self.use_beeps + custom_sound_config.play_beep_apollo = False + custom_sound_config.volume = custom_sound_config.volume * self.volume + + voice_index = await self._get_random_voice_index(len(voice_participant_mapping)) + if not voice_index: + return + for i, name in enumerate(voice_participant_mapping): + sound_config = original_sound_config + if self.force_radio_sound: + sound_config = copy.deepcopy(custom_sound_config) + sound_config.effects = [ + self.radio_sounds[self.randrange(len(self.radio_sounds))] + ] + + voice_participant_mapping[name] = (voice_index[i], sound_config) + + for name, text in clean_messages: + if not self.is_active(): + return + + # wait for audio_player idleing + while self.wingman.audio_player.is_playing: + time.sleep(2) + + if not self.is_active(): + return + + voice_index, sound_config = voice_participant_mapping[name] + voice_setting = self.voices[voice_index] + + await self._switch_voice(voice_setting) + if self.print_chatter: + await self.printr.print_async( + text=f"Background radio ({name}): {text}", + color=LogType.INFO, + source_name=self.wingman.name, + ) + self.threaded_execution(self.wingman.play_to_user, text, True, sound_config) + if self.radio_knowledge: + await self.wingman.add_assistant_message( + f"Background radio chatter: {text}" + ) + while 
not self.wingman.audio_player.is_playing:
+                time.sleep(0.1)
+            await self._switch_voice(original_voice_setting, elevenlabs_streaming)
+
+        while self.wingman.audio_player.is_playing:
+            time.sleep(1)  # stay in function call until last message got played
+
+    async def _get_random_voice_index(self, count: int) -> list[int]:
+        """Returns `count` distinct random indices into the voice list."""
+
+        if count > len(self.voices):
+            return []
+
+        if count == len(self.voices):
+            return list(range(len(self.voices)))
+
+        voice_index = []
+        for _ in range(count):
+            while True:
+                # randrange(n) already yields 0..n-1, so no offset is needed
+                index = self.randrange(len(self.voices))
+                if index not in voice_index:
+                    voice_index.append(index)
+                    break
+
+        return voice_index
+
+    async def _switch_voice(
+        self, voice_setting: VoiceSelection = None, elevenlabs_streaming: bool = False
+    ) -> None:
+        """Switch voice to the given voice setting."""
+
+        if not voice_setting:
+            return
+
+        voice_provider = voice_setting.provider
+        voice = voice_setting.voice
+        voice_name = None
+        error = False
+
+        if voice_provider == TtsProvider.WINGMAN_PRO:
+            if voice_setting.subprovider == WingmanProTtsProvider.OPENAI:
+                voice_name = voice.value
+                self.wingman.config.openai.tts_voice = voice
+            elif voice_setting.subprovider == WingmanProTtsProvider.AZURE:
+                voice_name = voice
+                self.wingman.config.azure.tts.voice = voice
+        elif voice_provider == TtsProvider.OPENAI:
+            voice_name = voice.value
+            self.wingman.config.openai.tts_voice = voice
+        elif voice_provider == TtsProvider.ELEVENLABS:
+            voice_name = voice.name or voice.id
+            self.wingman.config.elevenlabs.voice = voice
+            self.wingman.config.elevenlabs.output_streaming = elevenlabs_streaming
+        elif voice_provider == TtsProvider.AZURE:
+            voice_name = voice
+            self.wingman.config.azure.tts.voice = voice
+        elif voice_provider == TtsProvider.XVASYNTH:
+            voice_name = voice.voice_name
+            self.wingman.config.xvasynth.voice = voice
+        elif voice_provider == TtsProvider.EDGE_TTS:
+            voice_name = voice
+            self.wingman.config.edge_tts.voice = voice
+        else:
+            error = True
+
+        if error or not voice_name or not voice_provider:
+            await self.printr.print_async(
+                "Voice switching failed due to an unknown voice provider/subprovider.",
+                LogType.ERROR,
+            )
+            return
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                f"Switching voice to {voice_name} ({voice_provider.value})"
+            )
+
+        self.wingman.config.features.tts_provider = voice_provider
+
+    async def _get_original_voice_setting(self) -> VoiceSelection:
+        voice_provider = self.wingman.config.features.tts_provider
+        voice_subprovider = None
+        voice = None
+
+        if voice_provider == TtsProvider.EDGE_TTS:
+            voice = self.wingman.config.edge_tts.voice
+        elif voice_provider == TtsProvider.ELEVENLABS:
+            voice = self.wingman.config.elevenlabs.voice
+        elif voice_provider == TtsProvider.AZURE:
+            voice = self.wingman.config.azure.tts.voice
+        elif voice_provider == TtsProvider.XVASYNTH:
+            voice = self.wingman.config.xvasynth.voice
+        elif voice_provider == TtsProvider.OPENAI:
+            voice = self.wingman.config.openai.tts_voice
+        elif voice_provider == TtsProvider.WINGMAN_PRO:
+            voice_subprovider = self.wingman.config.wingman_pro.tts_provider
+            if (
+                self.wingman.config.wingman_pro.tts_provider
+                == WingmanProTtsProvider.OPENAI
+            ):
+                voice = self.wingman.config.openai.tts_voice
+            elif (
+                self.wingman.config.wingman_pro.tts_provider
+                == WingmanProTtsProvider.AZURE
+            ):
+                voice = self.wingman.config.azure.tts.voice
+        else:
+            return None
+
+        return VoiceSelection(
+            provider=voice_provider, subprovider=voice_subprovider, 
voice=voice
+        )
diff --git a/templates/migration/1_5_0/skills/spotify/default_config.yaml b/templates/migration/1_5_0/skills/spotify/default_config.yaml
new file mode 100644
index 00000000..36cb8568
--- /dev/null
+++ b/templates/migration/1_5_0/skills/spotify/default_config.yaml
@@ -0,0 +1,72 @@
+name: Spotify
+module: skills.spotify.main
+category: general
+description:
+  en: Control Spotify using the WebAPI. Play songs, artists, and playlists. Control playback, volume, and more. All powered by AI!
+  de: Steuere Spotify mit der WebAPI. Spiele Lieder, Künstler und Playlists ab. Steuere die Wiedergabe, Lautstärke und mehr. Alles von KI gesteuert!
+hint:
+  en:
+  de:
+examples:
+  - question:
+      en: Play the song Californication.
+      de: Spiele das Lied Californication.
+    answer:
+      en: Now playing 'Californication' by Red Hot Chili Peppers.
+      de: "'Californication' von Red Hot Chili Peppers wird abgespielt."
+  - question:
+      en: What's the current song?
+      de: Wie heißt das aktuelle Lied?
+    answer:
+      en: You are currently listening to 'Californication' by Red Hot Chili Peppers.
+      de: Du hörst gerade 'Californication' von Red Hot Chili Peppers.
+  - question:
+      en: My girlfriend left me. Play a really sad song to match my mood.
+      de: Meine Freundin hat mich verlassen. Spiele ein wirklich trauriges Lied, das zu meiner Stimmung passt.
+    answer:
+      en: I'm sorry for you. Now playing 'Someone Like You' by Adele.
+      de: Es tut mir leid für dich. Jetzt wird "Someone Like You" von Adele gespielt.
+  - question:
+      en: Play the most popular song from the musical Les Miserables.
+      de: Spiele das beliebteste Lied aus dem Musical Les Miserables.
+    answer:
+      en: Playing 'I Dreamed a Dream' from Les Miserables.
+      de: "'I Dreamed a Dream' aus Les Miserables wird abgespielt."
+  - question:
+      en: That's a cover song. Play the real version!
+      de: Das ist ein Cover-Song. Spiele die echte Version aus dem Film!
+    answer:
+      en: Playing 'I Dreamed a Dream' by Anne Hathaway from Les Miserables.
+      de: Spiele 'I Dreamed a Dream' von Anne Hathaway aus Les Miserables.
+  - question:
+      en: What are my Spotify devices?
+      de: Was sind meine Spotify-Geräte?
+    answer:
+      en: You have 2 devices available - 'Gaming PC' and 'iPhone'.
+      de: Du hast 2 Geräte verfügbar - 'Gaming PC' und 'iPhone'.
+  - question:
+      en: Play the music on my iPhone.
+      de: Spiele die Musik auf meinem iPhone ab.
+    answer:
+      en: Moves the current playback to your iPhone
+      de: Überträgt die Spotify-Wiedergabe auf das iPhone
+prompt: |
+  You are also an expert DJ and music player interface responsible for controlling the Spotify music player client of the user.
+  You have access to different tools or functions you can call to control the Spotify client using its API.
+  If the user asks you to play a song, resume, stop or pause the current playback etc., use your tools to do so.
+  For some functions, you need parameters like the song or artist name. Try to extract these values from the
+  user's request.
+  Never invent any function parameters. Ask the user for clarification if you are not sure or cannot extract function parameters.
+custom_properties:
+  - hint: Create an app in the Spotify Dashboard at https://developer.spotify.com/dashboard. You'll find the Client ID in the Settings of that app.
+    id: spotify_client_id
+    name: Spotify Client ID
+    required: true
+    value: enter-your-client-id-here
+    property_type: string
+  - hint: Create an app in the Spotify Dashboard at https://developer.spotify.com/dashboard. 
In the Settings of the app, add http://127.0.0.1:8082 (or any other free port) as Redirect URL. Then enter the same value here.
+    id: spotify_redirect_url
+    name: Spotify Redirect URL
+    required: true
+    value: http://127.0.0.1:8082
+    property_type: string
diff --git a/templates/migration/1_5_0/skills/spotify/logo.png b/templates/migration/1_5_0/skills/spotify/logo.png
new file mode 100644
index 00000000..4ceb1369
Binary files /dev/null and b/templates/migration/1_5_0/skills/spotify/logo.png differ
diff --git a/templates/migration/1_5_0/skills/spotify/main.py b/templates/migration/1_5_0/skills/spotify/main.py
new file mode 100644
index 00000000..82d3cd2d
--- /dev/null
+++ b/templates/migration/1_5_0/skills/spotify/main.py
@@ -0,0 +1,376 @@
+from os import path
+from typing import TYPE_CHECKING
+import spotipy
+from spotipy.oauth2 import SpotifyOAuth
+from api.enums import LogType
+from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError
+from services.file import get_writable_dir
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class Spotify(Skill):
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+        self.data_path = get_writable_dir(path.join("skills", "spotify", "data"))
+        self.spotify: spotipy.Spotify = None
+        self.available_devices = []
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+
+        secret = await self.retrieve_secret("spotify_client_secret", errors)
+        client_id = self.retrieve_custom_property_value("spotify_client_id", errors)
+        redirect_url = self.retrieve_custom_property_value(
+            "spotify_redirect_url", errors
+        )
+        if secret and client_id and redirect_url:
+            # now that we have everything, initialize the Spotify client
+            cache_handler = spotipy.cache_handler.CacheFileHandler(
+                cache_path=f"{self.data_path}/.cache"
+            )
+            self.spotify = spotipy.Spotify(
+                auth_manager=SpotifyOAuth(
+                    client_id=client_id,
+                    client_secret=secret,
+                    redirect_uri=redirect_url,
+                    scope=[
+                        "user-library-read",
+                        "user-read-currently-playing",
+                        "user-read-playback-state",
+                        "user-modify-playback-state",
+                        "streaming",
+                        # "playlist-modify-public",
+                        "playlist-read-private",
+                        # "playlist-modify-private",
+                        "user-library-modify",
+                        # "user-read-recently-played",
+                        # "user-top-read"
+                    ],
+                    cache_handler=cache_handler,
+                )
+            )
+        return errors
+
+    def get_tools(self) -> list[tuple[str, dict]]:
+        tools = [
+            (
+                "control_spotify_device",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "control_spotify_device",
+                        "description": "Retrieves or sets the audio device of the user that Spotify songs are played on.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "action": {
+                                    "type": "string",
+                                    "description": "The playback action to take",
+                                    "enum": ["get_devices", "set_active_device"],
+                                },
+                                "device_name": {
+                                    "type": "string",
+                                    "description": "The name of the device to set as the active device.",
+                                    "enum": [
+                                        device["name"]
+                                        for device in self.get_available_devices()
+                                    ],
+                                },
+                            },
+                            "required": ["action"],
+                        },
+                    },
+                },
+            ),
+            (
+                "control_spotify_playback",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "control_spotify_playback",
+                        "description": "Control the Spotify audio playback with actions like play, pause/stop or play the previous/next track or set the volume level.",
+                        "parameters": {
+                            "type": 
"object", + "properties": { + "action": { + "type": "string", + "description": "The playback action to take", + "enum": [ + "play", + "pause", + "stop", + "play_next_track", + "play_previous_track", + "set_volume", + "mute", + "get_current_track", + "like_song", + ], + }, + "volume_level": { + "type": "number", + "description": "The volume level to set (in percent)", + }, + }, + "required": ["action"], + }, + }, + }, + ), + ( + "play_song_with_spotify", + { + "type": "function", + "function": { + "name": "play_song_with_spotify", + "description": "Find a song with Spotify to either play it immediately or queue it.", + "parameters": { + "type": "object", + "properties": { + "track": { + "type": "string", + "description": "The name of the track to play", + }, + "artist": { + "type": "string", + "description": "The artist that created the track", + }, + "queue": { + "type": "boolean", + "description": "If true, the song will be queued and played later. Otherwise it will be played immediately.", + }, + }, + }, + }, + }, + ), + ( + "interact_with_spotify_playlists", + { + "type": "function", + "function": { + "name": "interact_with_spotify_playlists", + "description": "Play a song from a Spotify playlist or add a song to a playlist.", + "parameters": { + "type": "object", + "properties": { + "action": { + "type": "string", + "description": "The action to take", + "enum": ["get_playlists", "play_playlist"], + }, + "playlist": { + "type": "string", + "description": "The name of the playlist to interact with", + "enum": [ + playlist["name"] + for playlist in self.get_user_playlists() + ], + }, + }, + "required": ["action"], + }, + }, + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + instant_response = "" # not used here + function_response = "Unable to control Spotify." 
+ + if tool_name not in [ + "control_spotify_device", + "control_spotify_playback", + "play_song_with_spotify", + "interact_with_spotify_playlists", + ]: + return function_response, instant_response + + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Spotify: Executing {tool_name} with parameters: {parameters}", + color=LogType.INFO, + ) + + action = parameters.get("action", None) + parameters.pop("action", None) + function = getattr(self, action if action else tool_name) + function_response = function(**parameters) + + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response + + # HELPERS + + def get_available_devices(self): + devices = [ + device + for device in self.spotify.devices().get("devices") + if not device["is_restricted"] + ] + return devices + + def get_active_devices(self): + active_devices = [ + device + for device in self.spotify.devices().get("devices") + if device["is_active"] + ] + return active_devices + + def get_user_playlists(self): + playlists = self.spotify.current_user_playlists() + return playlists["items"] + + def get_playlist_uri(self, playlist_name: str): + playlists = self.spotify.current_user_playlists() + playlist = next( + ( + playlist + for playlist in playlists["items"] + if playlist["name"].lower() == playlist_name.lower() + ), + None, + ) + return playlist["uri"] if playlist else None + + # ACTIONS + + def get_devices(self): + active_devices = self.get_active_devices() + active_device_names = ", ".join([device["name"] for device in active_devices]) + available_device_names = ", ".join( + [device["name"] for device in self.get_available_devices()] + ) + if active_devices and len(active_devices) > 0: + return f"Your available devices are: {available_device_names}. Your active devices are: {active_device_names}." + if available_device_names: + return f"No active device found but these are the available devices: {available_device_names}" + + return "No devices found. Start Spotify on one of your devices first, then try again." + + def set_active_device(self, device_name: str): + if device_name: + device = next( + ( + device + for device in self.get_available_devices() + if device["name"] == device_name + ), + None, + ) + if device: + self.spotify.transfer_playback(device["id"]) + return "OK" + else: + return f"Device '{device_name}' not found." + + return "Device name not provided." + + def play(self): + self.spotify.start_playback() + return "OK" + + def pause(self): + self.spotify.pause_playback() + return "OK" + + def stop(self): + return self.pause() + + def play_previous_track(self): + self.spotify.previous_track() + return "OK" + + def play_next_track(self): + self.spotify.next_track() + return "OK" + + def set_volume(self, volume_level: int): + if volume_level: + self.spotify.volume(volume_level) + return "OK" + + return "Volume level not provided." + + def mute(self): + self.spotify.volume(0) + return "OK" + + def get_current_track(self): + current_playback = self.spotify.current_playback() + if current_playback: + artist = current_playback["item"]["artists"][0]["name"] + track = current_playback["item"]["name"] + return f"Currently playing '{track}' by '{artist}'." + + return "No track playing." + + def like_song(self): + current_playback = self.spotify.current_playback() + if current_playback: + track_id = current_playback["item"]["id"] + self.spotify.current_user_saved_tracks_add([track_id]) + return "Track saved to 'Your Music' library." 
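+
+        # reached only when current_playback is falsy, i.e. nothing is playing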
+
+        return "No track playing. Play a song, then tell me to like it."
+
+    def play_song_with_spotify(
+        self, track: str = None, artist: str = None, queue: bool = False
+    ):
+        if not track and not artist:
+            return "What song or artist would you like to play?"
+        # only search for the parts that were actually provided
+        query = " ".join(part for part in (track, artist) if part)
+        results = self.spotify.search(q=query, type="track", limit=1)
+        # guard against empty search results before indexing into them
+        items = results["tracks"]["items"]
+        track = items[0] if items else None
+        if track:
+            track_name = track["name"]
+            artist_name = track["artists"][0]["name"]
+            try:
+                if queue:
+                    self.spotify.add_to_queue(track["uri"])
+                    return f"Added '{track_name}' by '{artist_name}' to the queue."
+                else:
+                    self.spotify.start_playback(uris=[track["uri"]])
+                    return f"Now playing '{track_name}' by '{artist_name}'."
+            except spotipy.SpotifyException as e:
+                if e.reason == "NO_ACTIVE_DEVICE":
+                    return "No active device found. Start Spotify on one of your devices first, then play a song or tell me to activate a device."
+                return f"An error occurred while trying to play the song. Code: {e.code}, Reason: '{e.reason}'"
+
+        return "No track found."
+
+    def get_playlists(self):
+        playlists = self.get_user_playlists()
+        playlist_names = ", ".join([playlist["name"] for playlist in playlists])
+        if playlist_names:
+            return f"Your playlists are: {playlist_names}"
+
+        return "No playlists found."
+
+    def play_playlist(self, playlist: str = None):
+        if not playlist:
+            return "Which playlist would you like to play?"
+
+        playlist_uri = self.get_playlist_uri(playlist)
+        if playlist_uri:
+            self.spotify.start_playback(context_uri=playlist_uri)
+            return f"Playing playlist '{playlist}'."
+
+        return f"Playlist '{playlist}' not found."
diff --git a/templates/migration/1_5_0/skills/spotify/requirements.txt b/templates/migration/1_5_0/skills/spotify/requirements.txt
new file mode 100644
index 00000000..dbc2911c
--- /dev/null
+++ b/templates/migration/1_5_0/skills/spotify/requirements.txt
@@ -0,0 +1 @@
+spotipy==2.23.0
\ No newline at end of file
diff --git a/templates/migration/1_5_0/skills/star_head/default_config.yaml b/templates/migration/1_5_0/skills/star_head/default_config.yaml
new file mode 100644
index 00000000..478cdff0
--- /dev/null
+++ b/templates/migration/1_5_0/skills/star_head/default_config.yaml
@@ -0,0 +1,46 @@
+name: StarHead
+module: skills.star_head.main
+category: star_citizen
+description:
+  en: Use the StarHead API to retrieve detailed information about spaceships, weapons and more. StarHead can also calculate optimal trading routes based on live data.
+  de: Nutze die StarHead API, um detaillierte Informationen über Raumschiffe, Waffen und mehr abzurufen. StarHead kann auch optimale Handelsrouten anhand von Live-Daten berechnen.
+# hint:
+#   en:
+#   de:
+examples:
+  - question:
+      en: I want to trade. What's the best route?
+      de: Ich möchte handeln. Was ist die beste Route?
+    answer:
+      en: To provide you with the best trading route, I need to know your ship model, your current location, and your available budget. Could you please provide these details?
+      de: Um dir die beste Handelsroute anbieten zu können, muss ich dein Schiffsmodell, deinen aktuellen Standort und dein verfügbares Budget kennen. Kannst du mir diese Angaben bitte mitteilen?
+  - question:
+      en: I'm flying a Caterpillar and am near Yela. I have 100,000 credits to spend.
+      de: Ich fliege eine Caterpillar und bin in der Nähe von Yela. Ich habe 100.000 Credits auszugeben.
+    answer:
+      en: You can buy Stims at Deakins Research Outpost near Yela for 2.8 credits/unit and sell them at CRU-L1 Ambitious Dream Station for 3.85 credits/unit. 
The total profit for this route is approximately 37499 credits, and the travel time estimation is 41 minutes. + de: Du kannst Stims bei Deakins Research Outpost in der Nähe von Yela für 2,8 Credits/Stück kaufen und sie bei CRU-L1 Ambitious Dream Station für 3,85 Credits/Stück verkaufen. Der Gesamtgewinn für diese Route beträgt ca. 37499 Credits, und die geschätzte Reisezeit beträgt 41 Minuten. + - question: + en: What can you tell me about the Caterpillar? + de: Was kannst du mir über die Caterpillar erzählen? + answer: + en: The Constellation Taurus is a dedicated freighter, designed for hauling cargo. It has a cargo capacity of 174 SCU and is fully configurable but without all the bells and whistles found on other Constellation variants. On the other hand, the Constellation Andromeda is a multi-person freighter and the most popular ship in RSI's current production array. It has a cargo capacity of 96 SCU and is beloved by smugglers and merchants alike for its modular and high-powered capabilities. Both are part of the Constellation series, but the Taurus specifically caters to dedicated freight operations whereas the Andromeda serves as a multi-person versatile ship. + de: Die Constellation Taurus ist ein reiner Frachter, der für den Transport von Fracht entwickelt wurde. Er hat eine Ladekapazität von 174 SCU und ist voll konfigurierbar, hat aber nicht den ganzen Schnickschnack der anderen Constellation-Varianten. Die Constellation Andromeda hingegen ist ein Mehrpersonen-Frachter und das beliebteste Schiff in der aktuellen Produktion von RSI. Sie hat eine Ladekapazität von 96 SCU und ist bei Schmugglern und Händlern wegen ihrer modularen und leistungsstarken Fähigkeiten gleichermaßen beliebt. Beide gehören zur Constellation-Serie, aber die Taurus ist speziell für den reinen Frachtverkehr gedacht, während die Andromeda ein vielseitiges Schiff für mehrere Personen ist. +prompt: | + You also have access to the StarHead API which you can use to access live trading data and to retrieve additional information about spaceships in Star Citizen. + Your job is to find good trading routes for the user based on his/her ship, current location and available budget. + The user can also ask you about details of specific ships, components, weapons, and more. + You always use the tools available to you to retrieve the required information and to provide the user with the information. +custom_properties: + - hint: The URL of the StarHead API. + id: starhead_api_url + name: StarHead API URL + required: true + value: https://api.star-head.de + property_type: string + - hint: The URL of the Star Citizen Wiki API. 
+    id: star_citizen_wiki_api_url
+    name: Star Citizen Wiki API URL
+    required: true
+    value: https://api.star-citizen.wiki/api/v2
+    property_type: string
diff --git a/templates/migration/1_5_0/skills/star_head/logo.png b/templates/migration/1_5_0/skills/star_head/logo.png
new file mode 100644
index 00000000..80c534cb
Binary files /dev/null and b/templates/migration/1_5_0/skills/star_head/logo.png differ
diff --git a/templates/migration/1_5_0/skills/star_head/main.py b/templates/migration/1_5_0/skills/star_head/main.py
new file mode 100644
index 00000000..a5432b61
--- /dev/null
+++ b/templates/migration/1_5_0/skills/star_head/main.py
@@ -0,0 +1,275 @@
+import json
+from typing import Optional
+from typing import TYPE_CHECKING
+import requests
+from api.enums import LogType, WingmanInitializationErrorType
+from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class StarHead(Skill):
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+        # config entry existence not validated yet. Assign later when checked!
+        self.starhead_url = ""
+        """The base URL of the StarHead API"""
+
+        self.headers = {"x-origin": "wingman-ai"}
+        """Required headers for the StarHead API"""
+
+        self.timeout = 5
+        """Global timeout for calls to the StarHead API (in seconds)"""
+
+        self.star_citizen_wiki_url = ""
+
+        self.vehicles = []
+        self.ship_names = []
+        self.celestial_objects = []
+        self.celestial_object_names = []
+        self.quantum_drives = []
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+
+        self.starhead_url = self.retrieve_custom_property_value(
+            "starhead_api_url", errors
+        )
+        self.star_citizen_wiki_url = self.retrieve_custom_property_value(
+            "star_citizen_wiki_api_url", errors
+        )
+
+        try:
+            await self._prepare_data()
+        except Exception as e:
+            errors.append(
+                WingmanInitializationError(
+                    wingman_name=self.name,
+                    message=f"Failed to load data from StarHead API: {e}",
+                    error_type=WingmanInitializationErrorType.UNKNOWN,
+                )
+            )
+
+        return errors
+
+    async def _prepare_data(self):
+        self.vehicles = await self._fetch_data("vehicle")
+        self.ship_names = [
+            self._format_ship_name(vehicle)
+            for vehicle in self.vehicles
+            if vehicle["type"] == "Ship"
+        ]
+
+        self.celestial_objects = await self._fetch_data("celestialobject")
+        self.celestial_object_names = [
+            celestial_object["name"] for celestial_object in self.celestial_objects
+        ]
+
+        self.quantum_drives = await self._fetch_data(
+            "vehiclecomponent", {"typeFilter": 8}
+        )
+
+    async def _fetch_data(
+        self, endpoint: str, params: Optional[dict[str, any]] = None
+    ) -> list[dict[str, any]]:
+        url = f"{self.starhead_url}/{endpoint}"
+
+        if self.settings.debug_mode:
+            self.start_execution_benchmark()
+            await self.printr.print_async(
+                f"Retrieving {url}",
+                color=LogType.INFO,
+            )
+
+        response = requests.get(
+            url, params=params, timeout=self.timeout, headers=self.headers
+        )
+        response.raise_for_status()
+        if self.settings.debug_mode:
+            await self.print_execution_time()
+
+        return response.json()
+
+    def _format_ship_name(self, vehicle: dict[str, any]) -> str:
+        """Returns the display name of the given vehicle."""
+        return vehicle["name"]
+
+    async def execute_tool(
+        self, tool_name: str, parameters: dict[str, 
any] + ) -> tuple[str, str]: + instant_response = "" + function_response = "" + + if tool_name == "get_best_trading_route": + function_response = await self._get_best_trading_route(**parameters) + if tool_name == "get_ship_information": + function_response = await self._get_ship_information(**parameters) + return function_response, instant_response + + async def is_waiting_response_needed(self, tool_name: str) -> bool: + return True + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "get_best_trading_route", + { + "type": "function", + "function": { + "name": "get_best_trading_route", + "description": "Finds the best trade route for a given spaceship and position.", + "parameters": { + "type": "object", + "properties": { + "ship": {"type": "string", "enum": self.ship_names}, + "position": { + "type": "string", + "enum": self.celestial_object_names, + }, + "moneyToSpend": {"type": "number"}, + }, + "required": ["ship", "position", "moneyToSpend"], + }, + }, + }, + ), + ( + "get_ship_information", + { + "type": "function", + "function": { + "name": "get_ship_information", + "description": "Gives information about the given ship.", + "parameters": { + "type": "object", + "properties": { + "ship": {"type": "string", "enum": self.ship_names}, + }, + "required": ["ship"], + }, + }, + }, + ), + ] + + return tools + + async def _get_ship_information(self, ship: str) -> str: + try: + response = requests.get( + url=f"{self.star_citizen_wiki_url}/vehicles/{ship}", + timeout=self.timeout, + headers=self.headers, + ) + response.raise_for_status() + except requests.exceptions.RequestException as e: + return f"Failed to fetch ship information: {e}" + ship_details = json.dumps(response.json()) + return ship_details + + async def _get_best_trading_route( + self, ship: str, position: str, moneyToSpend: float + ) -> str: + """Calculates the best trading route for the specified ship and position. + Note that the function arguments have to match the funtion_args from OpenAI, hence the camelCase! + """ + + cargo, qd = await self._get_ship_details(ship) + if not cargo or not qd: + return f"Could not find ship '{ship}' in the StarHead database." + + celestial_object_id = self._get_celestial_object_id(position) + if not celestial_object_id: + return f"Could not find celestial object '{position}' in the StarHead database." + + data = { + "startCelestialObjectId": celestial_object_id, + "quantumDriveId": qd["id"] if qd else None, + "maxAvailablScu": cargo, + "maxAvailableMoney": moneyToSpend, + "useOnlyWeaponFreeZones": False, + "onlySingleSections": True, + } + url = f"{self.starhead_url}/trading" + try: + response = requests.post( + url=url, + json=data, + timeout=self.timeout, + headers=self.headers, + ) + response.raise_for_status() + except requests.exceptions.RequestException as e: + return f"Failed to fetch trading route: {e}" + + parsed_response = response.json() + if parsed_response: + section = parsed_response[0] + return json.dumps(section) + return f"No route found for ship '{ship}' at '{position}' with '{moneyToSpend}' aUEC." 
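+
+    # the helpers below resolve user-supplied names to StarHead IDs and loadouts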
+
+    def _get_celestial_object_id(self, name: str) -> Optional[int]:
+        """Finds the ID of the celestial object with the specified name."""
+        return next(
+            (
+                obj["id"]
+                for obj in self.celestial_objects
+                if obj["name"].lower() == name.lower()
+            ),
+            None,
+        )
+
+    async def _get_ship_details(
+        self, ship_name: str
+    ) -> tuple[Optional[int], Optional[dict[str, any]]]:
+        """Gets ship details including cargo capacity and quantum drive information."""
+        vehicle = next(
+            (
+                v
+                for v in self.vehicles
+                if self._format_ship_name(v).lower() == ship_name.lower()
+            ),
+            None,
+        )
+        if vehicle:
+            cargo = vehicle.get("scuCargo")
+            loadouts = await self._get_ship_loadout(vehicle.get("id"))
+            if loadouts:
+                loadout = next(
+                    (l for l in loadouts.get("loadouts") if l["isDefaultLayout"]), None
+                )
+                # without a default loadout there is no quantum drive to resolve
+                if not loadout:
+                    return cargo, None
+                qd = next(
+                    (
+                        qd
+                        for qd in self.quantum_drives
+                        for item in loadout.get("data")
+                        if item.get("componentId") == qd.get("id")
+                    ),
+                    None,
+                )
+                return cargo, qd
+        return None, None
+
+    async def _get_ship_loadout(
+        self, ship_id: Optional[int]
+    ) -> Optional[dict[str, any]]:
+        """Retrieves loadout data for a given ship ID."""
+        if ship_id:
+            try:
+                loadout = await self._fetch_data(f"vehicle/{ship_id}/loadout")
+                return loadout or None
+            except requests.HTTPError:
+                await self.printr.print_async(
+                    f"Failed to fetch loadout data for ship with ID: {ship_id}",
+                    color=LogType.ERROR,
+                )
+        return None
diff --git a/templates/migration/1_5_0/skills/timer/default_config.yaml b/templates/migration/1_5_0/skills/timer/default_config.yaml
new file mode 100644
index 00000000..14137a6b
--- /dev/null
+++ b/templates/migration/1_5_0/skills/timer/default_config.yaml
@@ -0,0 +1,24 @@
+name: Timer
+module: skills.timer.main
+category: general
+description:
+  en: Gives your Wingman the ability to delay actions - be it a reminder, a command or a function call.
+  de: Gibt deinem Wingman die Möglichkeit, Aktionen zu verzögern - sei es eine Erinnerung, ein Befehl oder ein Funktionsaufruf.
+examples:
+  - question:
+      en: Remind me in 60 minutes to take a break.
+      de: Erinnere mich in 60 Minuten daran, eine Pause zu machen.
+    answer:
+      en: (Tells you to take a break in 60 minutes)
+      de: (Erinnert dich in 60 Minuten daran, eine Pause zu machen)
+prompt: |
+  You can also delay actions with the `set_timer` function. The function has 3 parameters:
+
+  - The first argument is the time in seconds,
+  - the second argument is the function to call,
+  - the third argument is a list of arguments for the function.
+
+  Only use the `set_timer` function if the user requests to delay a function.
+  Never say a timer id, always use a descriptive name.
+  When confirming, keep it short and simple.
+  When reading out current timers, use the most suitable time unit.
diff --git a/templates/migration/1_5_0/skills/timer/logo.png b/templates/migration/1_5_0/skills/timer/logo.png new file mode 100644 index 00000000..ba63e750 Binary files /dev/null and b/templates/migration/1_5_0/skills/timer/logo.png differ diff --git a/templates/migration/1_5_0/skills/timer/main.py b/templates/migration/1_5_0/skills/timer/main.py new file mode 100644 index 00000000..3cfbdeb7 --- /dev/null +++ b/templates/migration/1_5_0/skills/timer/main.py @@ -0,0 +1,569 @@ +import random +import string +import asyncio +import time +import json +from typing import TYPE_CHECKING +from api.interface import SettingsConfig, SkillConfig +from api.enums import ( + LogSource, + LogType, +) +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class Timer(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + self.timers = {} + self.available_tools = [] + self.active = False + + async def prepare(self) -> None: + self.active = True + self.threaded_execution(self.start_timer_worker) + + async def unload(self) -> None: + self.active = False + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "set_timer", + { + "type": "function", + "function": { + "name": "set_timer", + "description": "set_timer function to delay other available functions.", + "parameters": { + "type": "object", + "properties": { + "delay": { + "type": "number", + "description": "The delay/timer in seconds.", + }, + "is_loop": { + "type": "boolean", + "description": "If the timer should loop or not.", + }, + "loops": { + "type": "number", + "description": "The amount of loops the timer should do. -1 for infinite loops.", + }, + "function": { + "type": "string", + # "enum": self._get_available_tools(), # end up beeing a recursive nightmare + "description": "The name of the function to execute after the delay. Must be a function name from the available tools.", + }, + "parameters": { + "type": "object", + "description": "The parameters for the function to execute after the delay. Must be a valid object with the required properties to their values. Can not be empty.", + }, + }, + "required": ["delay", "function", "parameters"], + "optional": ["is_loop", "loops"], + }, + }, + }, + ), + ( + "get_timer_status", + { + "type": "function", + "function": { + "name": "get_timer_status", + "description": "Get a list of all running timers and their remaining time and id.", + }, + }, + ), + ( + "cancel_timer", + { + "type": "function", + "function": { + "name": "cancel_timer", + "description": "Cancel a running timer by its id. Use this in combination with set_timer to change timers.", + "parameters": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "The id of the timer to cancel.", + }, + }, + "required": ["id"], + }, + }, + }, + ), + ( + "change_timer_settings", + { + "type": "function", + "function": { + "name": "change_timer_settings", + "description": "Change a timers loop and delay settings. 
Requires the id of the timer to change.", + "parameters": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "The id of the timer to change.", + }, + "delay": { + "type": "number", + "description": "The new delay/timer in seconds.", + }, + "is_loop": { + "type": "boolean", + "description": "If the timer should loop or not.", + }, + "loops": { + "type": "number", + "description": "The amount of remaining loops the timer should do. -1 for infinite loops.", + }, + }, + "required": ["id", "delay", "is_loop", "loops"], + }, + }, + }, + ), + ( + "remind_me", + { + "type": "function", + "function": { + "name": "remind_me", + "description": "Must only be called with the set_timer function. Will remind the user with the given message.", + "parameters": { + "type": "object", + "properties": { + "message": { + "type": "string", + "description": 'The message of the reminder to say to the user. For example User: "Remind me to take a break." -> Message: "This is your reminder to take a break."', + }, + }, + "required": ["message"], + }, + }, + }, + ), + ] + return tools + + async def is_waiting_response_needed(self, tool_name: str) -> bool: + return tool_name in ["set_timer"] + + def _get_available_tools(self) -> list[dict[str, any]]: + tools = self.wingman.build_tools() + tool_names = [] + for tool in tools: + name = tool.get("function", {}).get("name", None) + if name: + tool_names.append(name) + + return tool_names + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "" + instant_response = "" + + if tool_name in [ + "set_timer", + "get_timer_status", + "cancel_timer", + "change_timer_settings", + "remind_me", + ]: + if self.settings.debug_mode: + self.start_execution_benchmark() + + if tool_name == "set_timer": + function_response = await self.set_timer( + delay=parameters.get("delay", None), + is_loop=parameters.get("is_loop", False), + loops=parameters.get("loops", 1), + function=parameters.get("function", None), + parameters=parameters.get("parameters", {}), + ) + elif tool_name == "get_timer_status": + function_response = await self.get_timer_status() + elif tool_name == "cancel_timer": + function_response = await self.cancel_timer( + timer_id=parameters.get("id", None) + ) + elif tool_name == "change_timer_settings": + function_response = await self.change_timer_settings( + timer_id=parameters.get("id", None), + delay=parameters.get("delay", None), + is_loop=parameters.get("is_loop", False), + loops=parameters.get("loops", 1), + ) + elif tool_name == "remind_me": + function_response = await self.reminder( + message=parameters.get("message", None) + ) + + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response + + async def _get_tool_parameter_type_by_name(self, type_name: str) -> any: + if type_name == "object": + return dict + elif type_name == "string": + return str + elif type_name == "number": + return int + elif type_name == "boolean": + return bool + elif type_name == "array": + return list + else: + return None + + async def start_timer_worker(self) -> None: + while self.active: + await asyncio.sleep(2) + timers_to_delete = [] + for timer_id, timer in self.timers.items(): + delay = timer[0] + # function = timer[1] + # parameters = timer[2] + start_time = timer[3] + is_loop = timer[4] + loops = timer[5] + + if is_loop and loops == 0: + timers_to_delete.append(timer_id) + continue # skip timers marked for deletion + + if 
time.time() - start_time >= delay: + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.execute_timer(timer_id) + if self.settings.debug_mode: + await self.print_execution_time(True) + + # delete timers marked for deletion + for timer_id in timers_to_delete: + del self.timers[timer_id] + + self.timers = {} # clear timers + + async def execute_timer(self, timer_id: str) -> None: + if timer_id not in self.timers: + return + + # delay = self.timers[timer_id][0] + function = self.timers[timer_id][1] + parameters = self.timers[timer_id][2] + # start_time = self.timers[timer_id][3] + is_loop = self.timers[timer_id][4] + loops = self.timers[timer_id][5] + + response = await self.wingman.execute_command_by_function_call( + function, parameters + ) + if response: + summary = await self._summarize_timer_execution( + function, parameters, response + ) + await self.wingman.add_assistant_message(summary) + await self.printr.print_async( + f"{summary}", + color=LogType.POSITIVE, + source=LogSource.WINGMAN, + source_name=self.wingman.name, + skill_name=self.name, + ) + await self.wingman.play_to_user(summary, True) + + if not is_loop or loops == 1: + # we cant delete it here, because we are iterating over the timers in a sepereate thread + self.timers[timer_id][4] = True + self.timers[timer_id][5] = 0 + return + + self.timers[timer_id][3] = time.time() # reset start time + if loops > 0: + self.timers[timer_id][5] -= 1 # decrease remaining loops + + async def set_timer( + self, + delay: int = None, + is_loop: bool = False, + loops: int = -1, + function: str = None, + parameters: dict[str, any] = None, + ) -> str: + check_counter = 0 + max_checks = 2 + errors = [] + + while (check_counter == 0 or errors) and check_counter < max_checks: + errors = [] + + if delay is None or function is None: + errors.append("Missing delay or function.") + elif delay < 0: + errors.append("No timer set, delay must be greater than 0.") + + if "." in function: + function = function.split(".")[1] + + # check if tool call exists + tool_call = None + tool_call = next( + ( + tool + for tool in self.wingman.build_tools() + if tool.get("function", {}).get("name", False) == function + ), + None, + ) + + # if not valid it might be a command + if not tool_call and self.wingman.get_command(function): + parameters = {"command_name": function} + function = "execute_command" + + if not tool_call: + errors.append(f"Function {function} does not exist.") + else: + if tool_call.get("function", False) and tool_call.get( + "function", {} + ).get("parameters", False): + properties = ( + tool_call.get("function", {}) + .get("parameters", {}) + .get("properties", {}) + ) + required_parameters = ( + tool_call.get("function", {}) + .get("parameters", {}) + .get("required", []) + ) + + for name, value in properties.items(): + if name in parameters: + real_type = await self._get_tool_parameter_type_by_name( + value.get("type", "string") + ) + if not isinstance(parameters[name], real_type): + errors.append( + f"Parameter {name} must be of type {value.get('type', None)}, but is {type(parameters[name])}." + ) + elif value.get("enum", False) and parameters[ + name + ] not in value.get("enum", []): + errors.append( + f"Parameter {name} must be one of {value.get('enum', [])}, but is {parameters[name]}." + ) + if name in required_parameters: + required_parameters.remove(name) + + if required_parameters: + errors.append( + f"Missing required parameters: {required_parameters}." 
+ ) + + check_counter += 1 + if errors: + # try to let it fix itself + message_history = [] + for message in self.wingman.messages: + role = ( + message.role + if hasattr(message, "role") + else message.get("role", False) + ) + if role in ["user", "assistant", "system"]: + message_history.append( + { + "role": role, + "content": ( + message.content + if hasattr(message, "content") + else message.get("content", False) + ), + } + ) + data = { + "original_set_timer_call": { + "delay": delay, + "is_loop": is_loop, + "loops": loops, + "function": function, + "parameters": parameters, + }, + "message_history": ( + message_history + if len(message_history) <= 10 + else message_history[:1] + message_history[-9:] + ), + "tool_calls_definition": self.wingman.build_tools(), + "errors": errors, + } + + messages = [ + { + "role": "system", + "content": """ + The **set_timer** tool got called by a request with parameters that are incomplete or do not match the given requirements. + Please adjust the parameters "function" and "parameters" to match the requirements of the designated tool. + Make use of the message_history with the user previously to figure out missing parameters or wrong types. + And the tool_calls_definition to see the available tools and their requirements. + Use the **errors** information to figure out what it missing or wrong. + + Provide me an answer **only containing a valid JSON object** with the following structure for example: + { + "delay": 10, + "is_loop": false, + "loops": 1, + "function": "function_name", + "parameters": { + "parameter_name": "parameter_value" + } + } + """, + }, + {"role": "user", "content": json.dumps(data, indent=4)}, + ] + json_retry = 0 + max_json_retries = 1 + valid_json = False + while not valid_json and json_retry < max_json_retries: + completion = await self.llm_call(messages) + data = completion.choices[0].message.content + messages.append( + { + "role": "assistant", + "content": data, + } + ) + # check if data is valid json + try: + if data.startswith("```json") and data.endswith("```"): + data = data[len("```json") : -len("```")].strip() + data = json.loads(data) + except json.JSONDecodeError: + messages.append( + { + "role": "user", + "content": "Data is not valid JSON. Please provide valid JSON data.", + } + ) + json_retry += 1 + continue + + valid_json = True + delay = data.get("delay", False) + is_loop = data.get("is_loop", False) + loops = data.get("loops", 1) + function = data.get("function", False) + parameters = data.get("parameters", {}) + + if errors: + return f""" + No timer set. Communicate these errors to the user. + But make sure to align them with the message history so far: {errors} + """ + + # generate a unique id for the timer + letters_and_digits = string.ascii_letters + string.digits + timer_id = "".join(random.choice(letters_and_digits) for _ in range(10)) + + # set timer + current_time = time.time() + self.timers[timer_id] = [ + delay, + function, + parameters, + current_time, + is_loop, + loops, + ] + + return f"Timer set with id {timer_id}.\n\n{await self.get_timer_status()}" + + async def _summarize_timer_execution( + self, function: str, parameters: dict[str, any], response: str + ) -> str: + self.wingman.messages.append( + { + "role": "user", + "content": f""" + Timed "{function}" with "{parameters}" was executed. + Summarize the respone while you must stay in character! 
+ Dont mention it was a function call, go by the meaning: + {response} + """, + }, + ) + await self.wingman.add_context(self.wingman.messages) + completion = await self.llm_call(self.wingman.messages) + answer = ( + completion.choices[0].message.content + if completion and completion.choices + else "" + ) + return answer + + async def get_timer_status(self) -> list[dict[str, any]]: + timers = [] + for timer_id, timer in self.timers.items(): + if timer[4] and timer[5] == 0: + continue # skip timers marked for deletion + timers.append( + { + "id": timer_id, + "delay": timer[0], + "is_loop": timer[4], + "remaining_loops": ( + (timer[5] if timer[5] > 0 else "infinite") + if timer[4] + else "N/A" + ), + "remaining_time_in_seconds": round( + max(0, timer[0] - (time.time() - timer[3])) + ), + } + ) + return timers + + async def cancel_timer(self, timer_id: str) -> str: + if timer_id not in self.timers: + return f"Timer with id {timer_id} not found." + # we cant delete it here, because we are iterating over the timers in a sepereate thread + self.timers[timer_id][4] = True + self.timers[timer_id][5] = 0 + return f"Timer with id {timer_id} cancelled.\n\n{await self.get_timer_status()}" + + async def change_timer_settings( + self, timer_id: str, delay: int, is_loop: bool, loops: int + ) -> str: + if timer_id not in self.timers: + return f"Timer with id {timer_id} not found." + self.timers[timer_id][0] = delay + self.timers[timer_id][4] = is_loop + self.timers[timer_id][5] = loops + return f"Timer with id {timer_id} settings changed.\n\n{await self.get_timer_status()}" + + async def reminder(self, message: str = None) -> str: + if not message: + return "This is your reminder, no message was given." + return message diff --git a/templates/migration/1_5_0/skills/typing_assistant/default_config.yaml b/templates/migration/1_5_0/skills/typing_assistant/default_config.yaml new file mode 100644 index 00000000..32c977c6 --- /dev/null +++ b/templates/migration/1_5_0/skills/typing_assistant/default_config.yaml @@ -0,0 +1,27 @@ +name: TypingAssistant +module: skills.typing_assistant.main +category: general +description: + en: Let your Wingman type text for you! + de: Lass deinen Wingman Texte für dich tippen! +hint: + en: Types what you say, either by transcribing your speech or the response to your input. You can use this in Office or messaging programs etc. Not meant for pressing keys or macros - use Wingman commands for that. + de: Tippt, was du sagst, indem deine Spracheingabe oder die Antwort darauf transkribiert werden. Verwende den Skill in Office- oder Messaging-Programmen usw. Er ist nicht für das Drücken einzelner Tasten oder Makros gedacht - dafür kannst du Wingman-Befehle verwenden. +examples: + - question: + en: Type "How are you today?". + de: Tippe "Wie geht's dir heute?". + answer: + en: (types "How are you today?" in active window at the current cursor location) + de: (Tippt "Wie geht's dir heute?" im aktiven Fenster an der aktuellen Cursor-Position ein) + - question: + en: Type a poem about trees. + de: Schreibe ein Gedicht über Bäume. + answer: + en: (imagines a poem about trees and then types it in the active window) + de: (Erfindet ein Gedicht über Bäume und tippt es dann in das aktive Fenster) +prompt: | + You can also type what the user says if they ask you to. The user might dictate what you type, word for word. + The user might also ask you to imagine something, such as a poem, an email, or a speech, and then you type that content. 
+ Use the context of the user's request to determine what content the user wants you to type. + Always use the tool assist_with_typing to type but only type if the user specifically asks you to. diff --git a/templates/migration/1_5_0/skills/typing_assistant/logo.png b/templates/migration/1_5_0/skills/typing_assistant/logo.png new file mode 100644 index 00000000..d896bcf9 Binary files /dev/null and b/templates/migration/1_5_0/skills/typing_assistant/logo.png differ diff --git a/templates/migration/1_5_0/skills/typing_assistant/main.py b/templates/migration/1_5_0/skills/typing_assistant/main.py new file mode 100644 index 00000000..db79fe85 --- /dev/null +++ b/templates/migration/1_5_0/skills/typing_assistant/main.py @@ -0,0 +1,83 @@ +import time +from typing import TYPE_CHECKING +from api.interface import ( + SettingsConfig, + SkillConfig, +) +from api.enums import LogType +from skills.skill_base import Skill +import keyboard.keyboard as keyboard + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class TypingAssistant(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "assist_with_typing", + { + "type": "function", + "function": { + "name": "assist_with_typing", + "description": "Identifies what the user wants the AI to type into an active application window. This may be either transcribing exactly what the user says or typing something the user wants the AI to imagine and then type. Also identifies whether to end the typed content with a press of the Enter / Return key, common typically for typing a response to a chat message or form field.", + "parameters": { + "type": "object", + "properties": { + "content_to_type": { + "type": "string", + "description": "The content the user wants the assistant to type.", + }, + "end_by_pressing_enter": { + "type": "boolean", + "description": "Boolean True/False indicator of whether the typed content should end by pressing the enter key on the keyboard. Default False. Typically True when typing a response in a chat program.", + }, + }, + "required": ["content_to_type"], + }, + }, + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "Error in typing. Can you please try your command again?" + instant_response = "" + + if tool_name == "assist_with_typing": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing assist_with_typing function with parameters: {parameters}", + color=LogType.INFO, + ) + + content_to_type = parameters.get("content_to_type") + press_enter = parameters.get("end_by_pressing_enter") + + keyboard.write(content_to_type, delay=0.01, hold=0.01) + + if press_enter is True: + keyboard.press("enter") + time.sleep(0.2) + keyboard.release("enter") + + function_response = "Typed user request at active mouse cursor position." 
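+
+            # keyboard.write emits simulated keystrokes, so the text lands in
+            # whichever window currently has keyboard focus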
+
+        if self.settings.debug_mode:
+            await self.print_execution_time()
+
+        return function_response, instant_response
diff --git a/templates/migration/1_5_0/skills/uexcorp/default_config.yaml b/templates/migration/1_5_0/skills/uexcorp/default_config.yaml
new file mode 100644
index 00000000..c82d742e
--- /dev/null
+++ b/templates/migration/1_5_0/skills/uexcorp/default_config.yaml
@@ -0,0 +1,203 @@
+name: UEXCorp
+module: skills.uexcorp.main
+category: star_citizen
+description:
+  en: Use the UEXCorp API to get live trading and lore information about ships, locations, commodities and more in Star Citizen.
+  de: Nutze die UEXCorp API, um live Handels- und Lore-Informationen über Schiffe, Orte, Rohstoffe und mehr in Star Citizen zu erhalten.
+examples:
+  - question:
+      en: Please provide me the best two trading routes for my Caterpillar, I'm currently at Hurston.
+      de: Bitte nenne mir die zwei besten Handelsrouten für meine Caterpillar, die derzeit in Hurston steht.
+    answer:
+      en: You have two highly profitable trading routes available. The first route involves transporting Recycled Material Composite from Pickers Field on Hurston to Orison - Trade & Development Division on the planet Crusader, offering a profit of 2,148,480 aUEC for 576 SCU of cargo. The second route entails shipping Laranite from HDMS-Lathan on the satellite Arial back to Central Business District in Lorville, resulting in a profit of 297,216 aUEC for the same cargo capacity.
+      de: Du hast zwei hochprofitable Handelsrouten zur Verfügung. Auf der ersten Route transportierst du Recyclingmaterial von Pickers Field auf Hurston nach Orison - Trade & Development Division auf dem Planeten Crusader und erzielst einen Gewinn von 2.148.480 AUEC für 576 SCU Fracht. Auf der zweiten Route wird Laranit von HDMS-Lathan auf dem Satelliten Arial zurück zum Central Business District in Lorville transportiert, was einen Gewinn von 297.216 AUEC für die gleiche Frachtkapazität bedeutet.
+  - question:
+      en: I got 3000 SCU of Hydrogen loaded in my Hull-C, where can I sell it?
+      de: Ich habe 3000 SCU Hydrogen in meinem Hull-C geladen. Wo kann ich ihn verkaufen?
+    answer:
+      en: You can sell the 3000 SCU of Hydrogen at Baijini Point, located on ArcCorp in the Stanton system.
+      de: Du kannst die 3000 SCU Hydrogen am Baijini Point auf ArcCorp im Stanton-System verkaufen.
+  - question:
+      en: What can you tell me about the Hull-C?
+      de: Was kannst du mir über den Hull-C erzählen?
+    answer:
+      en: The Hull-C is manufactured by Musashi Industrial & Starflight Concern and falls under the 'HULL' series. It serves as a freighter and has a cargo capacity of 4608 SCU. The ship can be purchased at New Deal in Lorville, Hurston for 15,750,000 aUEC. It can accommodate a crew of 1-4 and is designed for trading on suitable space stations with a cargo deck.
+      de: Die Hull-C wird von Musashi Industrial & Starflight Concern hergestellt und gehört zur "HULL"-Serie. Sie dient als Frachter und hat eine Frachtkapazität von 4608 SCU. Das Schiff kann bei New Deal in Lorville, Hurston für 15.750.000 AUEC erworben werden. Es kann eine Besatzung von 1-4 Personen aufnehmen und ist für den Handel mit geeigneten Raumstationen mit einem Frachtdeck ausgelegt.
+  - question:
+      en: What is the difference between the Mole and Prospector?
+      de: Was ist der Unterschied zwischen der MOLE und dem Prospector?
+    answer:
+      en: The MOLE is a mining ship manufactured by ARGO Astronautics, with a capacity for 2-4 crew members and a cargo hold of 96 SCU. 
It has been flyable since version 3.8.0 and is available for purchase at Lorville on Hurston for 5,130,500 aUEC. On the other hand, the Prospector, manufactured by Musashi Industrial & Starflight Concern, is a smaller mining vessel designed for a single crew member, with a cargo capacity of 32 SCU. It has been flyable since version 3.0.0 and can be purchased at Lorville for 2,061,000 aUEC. The Prospector is also available for rent at various locations in the Stanton system, unlike the MOLE.
+      de: Die MOLE ist ein von ARGO Astronautics hergestelltes Bergbauschiff mit einer Kapazität für 2-4 Besatzungsmitglieder und einem Laderaum von 96 SCU. Sie ist seit Version 3.8.0 flugfähig und kann bei Lorville auf Hurston für 5.130.500 AUEC gekauft werden. Der Prospector, hergestellt von Musashi Industrial & Starflight Concern, ist ein kleineres Bergbauschiff für ein einziges Besatzungsmitglied und hat eine Ladekapazität von 32 SCU. Er ist seit Version 3.0.0 flugfähig und kann auf Lorville für 2.061.000 AUEC gekauft werden. Im Gegensatz zur MOLE kann der Prospector auch an verschiedenen Orten im Stanton-System gemietet werden.
+  - question:
+      en: What do you know about the commodity Neon?
+      de: Was weißt du über die Ware Neon?
+    answer:
+      en: The commodity 'Neon' is categorized as a drug and is available for purchase at Nuen Waste Management for 7,000 aUEC. It is also sellable at Grim HEX for 8,428 aUEC and at Jumptown for 8,805 aUEC. However, it's important to note that Neon is illegal, and engaging in activities involving this commodity may lead to fines and a crimestat. It's advisable to avoid ship scans to mitigate the risk of penalties.
+      de: Die Ware "Neon" wird als Droge eingestuft und kann bei Nuen Waste Management für 7.000 AUEC gekauft werden. Außerdem kann es bei Grim HEX für 8.428 AUEC und bei Jumptown für 8.805 AUEC verkauft werden. Es ist jedoch wichtig zu wissen, dass Neon illegal ist und dass der Handel mit dieser Ware zu Geldstrafen und einem CrimeStat führen kann. Es ist ratsam, Schiffsscans zu vermeiden, um das Risiko von Strafen zu minimieren.
+  - question:
+      en: What is the best place to buy Laranite?
+      de: Wo kann ich Laranit am besten kaufen?
+    answer:
+      en: The best place to buy Laranite for your Caterpillar is at ArcCorp Mining Area 045, located on the satellite Wala in the Stanton system. It's available for purchase at 2,322 aUEC per SCU.
+      de: Der beste Ort, um Laranit für deine Caterpillar zu kaufen, ist das ArcCorp-Bergbaugebiet 045, das sich auf dem Satelliten Wala im Stanton-System befindet. Du kannst es für 2.322 AUEC pro SCU kaufen.
+  - question:
+      en: Where is the satellite Wala located?
+      de: Wo befindet sich der Satellit Wala?
+    answer:
+      en: The satellite Wala is located in the Stanton system, orbiting the planet ArcCorp. It hosts various trading options, including locations like ArcCorp Mining Area 045, ArcCorp Mining Area 048, ArcCorp Mining Area 056, ArcCorp Mining Area 061, Shady Glen Farms, and Samson & Son's Salvage Center.
+      de: Der Satellit Wala befindet sich im Stanton-System und umkreist den Planeten ArcCorp. Er beherbergt verschiedene Handelsmöglichkeiten, darunter Orte wie ArcCorp Mining Area 045, ArcCorp Mining Area 048, ArcCorp Mining Area 056, ArcCorp Mining Area 061, Shady Glen Farms und Samson & Son's Salvage Center.
+prompt: |
+  You also have tools to access the UEXcorp API which you can use to retrieve live trading data and additional information about ships, locations, commodities and more in Star Citizen.
+  Here are some general rules, followed by examples of when to use the different tools at your disposal:
+
+  Do not use markdown formatting (e.g. **name**) in your answers, but prefer lists to show multiple options or pieces of information.
+  Never translate any properties when giving them to the player. They must stay in English, untouched.
+  Only give function parameters that were previously clearly provided by a request. Never assume any values, not the current ship, not the location, not the available money, nothing! Always send a None value instead.
+  If you are not using one of the defined functions, don't give any trading recommendations.
+  If you execute a function that requires a commodity name, make sure to always provide the name in English, not in German or any other language.
+  Never mention optional function (tool) parameters to the user. Only mention the required parameters if some are missing.
+
+  Samples when to use function "get_trading_routes":
+  - "Best trading route": Indicates a user's intent to find the best trading route.
+  - "Trade route": Suggests the user is seeking information on trading routes.
+  - "Profitable trade route": Implies a focus on finding profitable trading routes.
+  - "Trading advice": Indicates the user wants guidance on trading decisions.
+
+  Samples when to use function "get_locations_to_sell_to":
+  - "Sell commodity": Indicates a user's intent to sell a specific item.
+  - "Best place to sell": Suggests the user is seeking information on optimal selling locations.
+  - "Seller's market": Implies a focus on finding favorable selling conditions.
+  - "Selling advice": Indicates the user wants guidance on selling decisions.
+  - "Seller's guide": Suggests a request for assistance in the selling process.
+  - "Find buyer": Indicates the user's interest in locating potential buyers.
+  - "Sell item": Implies a user's intent to sell an item.
+  - "Sell cargo": Suggests a focus on selling cargo or goods.
+  - "Offload inventory": Signals the intention to sell available inventory.
+
+  Samples when to use function "get_locations_to_buy_from":
+  - "Buy commodity": Indicates a user's intent to purchase a specific item.
+  - "Best place to buy": Suggests the user is seeking information on optimal buying locations.
+  - "Buyer's market": Implies a focus on finding favorable buying conditions.
+  - "Purchase location": Signals interest in identifying a suitable location for buying.
+  - "Buying advice": Indicates the user wants guidance on purchasing decisions.
+  - "Buyer's guide": Suggests a request for assistance in the buying process.
+
+  Samples when to use function "get_location_information":
+  - "Location information": Indicates a user's intent to gather information about a specific location.
+  - "Location details": Suggests the user is seeking detailed information about a specific location.
+
+  Samples when to use function "get_ship_information":
+  - "Ship information": Indicates a user's intent to gather information about a specific ship.
+  - "Ship details": Suggests the user is seeking detailed information about a specific ship.
+
+  Samples when to use function "get_ship_comparison":
+  - "Ship comparison": Indicates a user's intent to compare two ships. Also use this every time at least two ships are mentioned in the request.
+
+  Samples when to use function "get_commodity_information":
+  - "Commodity information": Indicates a user's intent to gather information about a specific commodity.
+  - "Commodity details": Suggests the user is seeking detailed information about a specific commodity.
+  - "Commodity prices": Implies a focus on obtaining current prices for a specific commodity.
+
+  Samples when to use function "reload_current_commodity_prices" (Great to use before retrieving sell, buy and trade options):
+  - "Update commodity prices": Indicates a user's intent to update the commodity prices.
+  - "Get current prices": Suggests the user is seeking the current commodity prices.
+  - "Refresh prices": Implies a focus on updating the commodity prices.
+custom_properties:
+  - id: uexcorp_api_url
+    name: API URL
+    hint: The URL of the UEX corp API.
+    value: https://uexcorp.space/api/2.0/
+    required: true
+    property_type: string
+  - id: uexcorp_api_timeout
+    name: API Timeout
+    hint: The timeout for the UEX corp API and, if used, the Star Citizen wiki API, in seconds. (If set below 3s, 3s will be used.)
+    value: 6
+    required: true
+    property_type: number
+  - id: uexcorp_api_timeout_retries
+    name: API Timeout Retries
+    hint: How often a request to the UEX corp API should be retried in case of a timeout. (The timeout setting may increase automatically on each retry.)
+    value: 3
+    required: true
+    property_type: number
+  # Set this option to "true" to enable caching of the UEX corp API responses. This is recommended, as the API key's quota is very limited.
+  # If you set this option to "false", the Wingman will fetch all data from the UEX corp API on every start.
+  # If you want to update the prices, just tell the Wingman to do so.
+  # If all data should be fetched again, delete the cache file. (wingman_data\uexcorp\cache.json)
+  - id: uexcorp_cache
+    name: Enable Cache
+    hint: Set this option to "true" to enable caching of the UEX corp API responses. This is recommended, as the API key's quota is very limited.
+    value: true
+    required: true
+    property_type: boolean
+
+  # Set this option to the number of seconds you want to cache the UEX corp API responses.
+  # We recommend a day ("86400"), as the ship, planet, etc. information does not change that often.
+  - id: uexcorp_cache_duration
+    name: Cache Duration
+    hint: Set this option to the number of seconds you want to cache the UEX corp API responses. We recommend a day ("86400").
+    value: 86400
+    required: true
+    property_type: number
+
+  # Set this option to "true" to show only one of the most profitable routes for each commodity.
+  # Set this option to "false" to show all routes. This may include multiple routes for the same commodity.
+  # Recommended: "true"
+  - id: uexcorp_summarize_routes_by_commodity
+    name: Summarize Routes by Commodity
+    hint: Set this option to "true" to show only the most profitable routes per commodity. "false" shows multiple options per commodity.
+    value: true
+    required: true
+    property_type: boolean
+
+  # Set this option to "true" to make the start location for trade route calculation mandatory.
+  # Set this option to "false" to make the start location for trade route calculation optional.
+  # If "false" and no start location is given, all tradeports are taken into account.
+  - id: uexcorp_tradestart_mandatory
+    name: Trade Start Mandatory
+    hint: Set this option to "true" to make the start location for trade route calculation mandatory. If "false" and no start location is given, all tradeports are taken into account.
+    value: true
+    required: true
+    property_type: boolean
+
+  # Use this to blacklist certain trade ports or commodities or combinations of both.
+  # The default value is '[]', which means no trade ports or commodities are blacklisted.
+  # If we want to add a trade port to the blacklist, we add something like this: {"tradeport":"Baijini Point"} This will blacklist the trade port completely from trade route calculations.
+  # If we want to add a commodity to the blacklist, we add something like this: {"commodity":"Medical Supplies"} This will blacklist the commodity completely from trade route calculations.
+  # If we want to add a combination to the blacklist, we add something like this: {"tradeport":"Baijini Point", "commodity":"Medical Supplies"} This will blacklist this commodity for the given trade port.
+  # If we want to add multiple trade ports or commodities or combinations of both, we add them in a list like this: [{"tradeport":"Baijini Point", "commodity":"Medical Supplies"}, {"commodity":"Medical Supplies"}, {"tradeport":"Port Tressler"}]
+  # This value is a JSON string; if you have created a list, use a JSON validator like https://jsonlint.com/ to check that the list is valid.
+  - id: uexcorp_trade_blacklist
+    name: Trade Blacklist
+    hint: JSON string to blacklist certain trade ports or commodities or combinations of both. Default value is empty ('[]'). Sample -> [{"tradeport":"Baijini Point", "commodity":"Medical Supplies"}, {"commodity":"Medical Supplies"}, {"tradeport":"Port Tressler"}]
+    value: "[]"
+    required: true
+    property_type: string
+
+  # Set this option to the number of trade routes you want to show by default.
+  # You can always tell Wingman AI to show more or fewer trade routes for a single request; if no number is given, this setting is used.
+  - id: uexcorp_default_trade_route_count
+    name: Default Trade Route Count
+    hint: Set this option to the number of trade routes you want to show by default.
+    value: 1
+    required: true
+    property_type: number
+
+  # Set this option to true to take estimated SCU availability into account for trade route calculations.
+  # This will reduce the number of trade routes shown, but will give you more accurate results.
+  - id: uexcorp_use_estimated_availability
+    name: Use Estimated Availability
+    hint: Enable this option to take estimated SCU availability into account for trade route calculations.
+    value: true
+    required: true
+    property_type: boolean
+
+  # Set this option to true to get additional lore information from api.star-citizen.wiki on ships, locations, commodities and more.
+  - id: uexcorp_add_lore
+    name: Add Lore Information
+    hint: Enable this to retrieve additional lore information from api.star-citizen.wiki on ships, locations, commodities and more. Note that this adds a delay to the response.
+    value: false
+    required: true
+    property_type: boolean
diff --git a/templates/migration/1_5_0/skills/uexcorp/logo.png b/templates/migration/1_5_0/skills/uexcorp/logo.png
new file mode 100644
index 00000000..94392d3f
Binary files /dev/null and b/templates/migration/1_5_0/skills/uexcorp/logo.png differ
diff --git a/templates/migration/1_5_0/skills/uexcorp/main.py b/templates/migration/1_5_0/skills/uexcorp/main.py
new file mode 100644
index 00000000..630247d1
--- /dev/null
+++ b/templates/migration/1_5_0/skills/uexcorp/main.py
@@ -0,0 +1,3185 @@
+import asyncio
+import difflib
+import heapq
+import itertools
+import json
+import math
+import traceback
+from os import path
+import collections
+import re
+from typing import Optional, TYPE_CHECKING
+from datetime import datetime
+import requests
+from api.enums import LogType, WingmanInitializationErrorType
+from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError
+from services.file import get_writable_dir
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class UEXCorp(Skill):
+    """Wingman AI Skill that uses the UEX corp API for trade recommendations."""
+
+    # enable for verbose logging
+    DEV_MODE = False
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+        self.data_path = get_writable_dir(path.join("skills", "uexcorp", "data"))
+        self.logfileerror = path.join(self.data_path, "error.log")
+        self.logfiledebug = path.join(self.data_path, "debug.log")
+        self.cachefile = path.join(self.data_path, "cache.json")
+
+        self.skill_version = "v13"
+        self.skill_loaded = False
+        self.skill_loaded_asked = False
+        self.game_version = "unknown"
+
+        # init of config options
+        self.uexcorp_api_url: str = None
+        # self.uexcorp_api_key: str = None
+        self.uexcorp_api_timeout: int = None
+        self.uexcorp_api_timeout_retries: int = None
+        self.uexcorp_cache: bool = None
+        self.uexcorp_cache_duration: int = None
+        self.uexcorp_summarize_routes_by_commodity: bool = None
+        self.uexcorp_tradestart_mandatory: bool = None
+        self.uexcorp_trade_blacklist = []
+        self.uexcorp_default_trade_route_count: int = None
+        self.uexcorp_use_estimated_availability: bool = None
+        self.uexcorp_add_lore: bool = None
+
+        self.ships = []
+        self.ship_names = []
+        self.ship_dict = {}
+        self.ship_code_dict = {}
+
+        self.commodities = []
+        self.commodity_names = []
+        self.commodity_dict = {}
+        self.commodity_code_dict = {}
+
+        self.systems = []
+        self.system_names = []
+        self.system_dict = {}
+        self.system_code_dict = {}
+
+        self.terminals = []
+        self.terminal_names = []
+        self.terminal_names_trading = []
+        self.terminal_dict = {}
+        self.terminal_code_dict = {}
+        self.terminals_by_system = collections.defaultdict(list)
+        self.terminals_by_planet = collections.defaultdict(list)
+        self.terminals_by_moon = collections.defaultdict(list)
+        self.terminals_by_city = collections.defaultdict(list)
+
+        self.planets = []
+        self.planet_names = []
+        self.planet_dict = {}
+        self.planet_code_dict = {}
+        self.planets_by_system = collections.defaultdict(list)
+
+        self.moons = []
+        self.moon_names = []
+        self.moon_dict = {}
+        self.moon_code_dict = {}
+        self.moons_by_planet = collections.defaultdict(list)
+
+        self.cities = []
+        self.city_names = []
+        self.city_dict = {}
+        self.city_code_dict = {}
+        self.cities_by_planet = collections.defaultdict(list)
+
+        self.location_names_set = set()
+
self.location_names_set_trading = set() + + self.cache_enabled = True + self.cache = { + "function_args": {}, + "search_matches": {}, + "readable_objects": {}, + } + + self.dynamic_context = "" + + async def _print( + self, message: str | dict, is_extensive: bool = False, is_debug: bool = True + ) -> None: + """ + Prints a message if debug mode is enabled. Will be sent to the server terminal, log file and client. + + Args: + message (str | dict): The message to be printed. + is_extensive (bool, optional): Whether the message is extensive. Defaults to False. + + Returns: + None + """ + if (not is_extensive and self.settings.debug_mode) or not is_debug: + await self.printr.print_async( + message, + color=LogType.INFO, + ) + elif self.DEV_MODE: + with open(self.logfiledebug, "a", encoding="UTF-8") as f: + f.write(f"#### Time: {datetime.now()} ####\n") + f.write(f"{message}\n\n") + + def _log(self, message: str | dict, is_extensive: bool = False) -> None: + """ + Prints a debug message (synchronously) only on the server (and in the log file). + + Args: + message (str | dict): The message to be printed. + is_extensive (bool, optional): Whether the message is extensive. Defaults to False. + + Returns: + None + """ + if not is_extensive and self.settings.debug_mode: + self.printr.print( + message, + color=LogType.INFO, + server_only=True, + ) + elif self.DEV_MODE: + with open(self.logfiledebug, "a", encoding="UTF-8") as f: + f.write(f"#### Time: {datetime.now()} ####\n") + f.write(f"{message}\n\n") + + def _get_function_arg_from_cache( + self, arg_name: str, arg_value: str | int = None + ) -> str | int | None: + """ + Retrieves a function argument from the cache if available, otherwise returns the provided argument value. + + Args: + arg_name (str): The name of the function argument. + arg_value (str | int, optional): The default value for the argument. Defaults to None. + + Returns: + dict[str, any]: The cached value of the argument if available, otherwise the provided argument value. + """ + if not self.cache_enabled: + return arg_value + + if arg_value is None or ( + isinstance(arg_value, str) and arg_value.lower() == "current" + ): + cached_arg = self.cache["function_args"].get(arg_name) + if cached_arg is not None: + self._log( + f"'{arg_name}' was not given and got overwritten by cache: {cached_arg}" + ) + return cached_arg + + return arg_value + + def _set_function_arg_to_cache( + self, arg_name: str, arg_value: str | int | float = None + ) -> None: + """ + Sets the value of a function argument to the cache. + + Args: + arg_name (str): The name of the argument. + arg_value (str | int, optional): The value of the argument. Defaults to None. + """ + if not self.cache_enabled: + return + + function_args = self.cache["function_args"] + old_value = function_args.get(arg_name, "None") + + if arg_value is not None: + self._log( + f"Set function arg '{arg_name}' to cache. Previous value: {old_value} >> New value: {arg_value}", + True, + ) + function_args[arg_name] = arg_value + elif arg_name in function_args: + self._log( + f"Removing function arg '{arg_name}' from cache. 
Previous value: {old_value}", + True, + ) + function_args.pop(arg_name, None) + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + # self.uexcorp_api_key = await self.retrieve_secret( + # "uexcorp", errors, "You can create your own API key here: https://uexcorp.space/api/apps/" + # ) + self.uexcorp_api_url = self.retrieve_custom_property_value( + "uexcorp_api_url", errors + ) + self.uexcorp_api_timeout = self.retrieve_custom_property_value( + "uexcorp_api_timeout", errors + ) + self.uexcorp_api_timeout_retries = self.retrieve_custom_property_value( + "uexcorp_api_timeout_retries", errors + ) + self.uexcorp_cache = self.retrieve_custom_property_value( + "uexcorp_cache", errors + ) + self.uexcorp_cache_duration = self.retrieve_custom_property_value( + "uexcorp_cache_duration", errors + ) + self.uexcorp_summarize_routes_by_commodity = ( + self.retrieve_custom_property_value( + "uexcorp_summarize_routes_by_commodity", errors + ) + ) + self.uexcorp_tradestart_mandatory = self.retrieve_custom_property_value( + "uexcorp_tradestart_mandatory", errors + ) + self.uexcorp_default_trade_route_count = self.retrieve_custom_property_value( + "uexcorp_default_trade_route_count", errors + ) + self.uexcorp_use_estimated_availability = self.retrieve_custom_property_value( + "uexcorp_use_estimated_availability", errors + ) + self.uexcorp_add_lore = self.retrieve_custom_property_value( + "uexcorp_add_lore", errors + ) + + trade_backlist_str: str = self.retrieve_custom_property_value( + "uexcorp_trade_blacklist", errors + ) + if trade_backlist_str: + try: + self.uexcorp_trade_blacklist = json.loads(trade_backlist_str) + except json.decoder.JSONDecodeError: + errors.append( + WingmanInitializationError( + wingman_name=self.name, + message="Invalid custom property 'uexcorp_trade_blacklist' in config. Value must be a valid JSON string.", + error_type=WingmanInitializationErrorType.INVALID_CONFIG, + ) + ) + + try: + await self._start_loading_data() + except Exception as e: + errors.append( + WingmanInitializationError( + wingman_name=self.name, + message=f"Failed to load data: {e}", + error_type=WingmanInitializationErrorType.UNKNOWN, + ) + ) + + return errors + + async def _load_data(self, reload_prices: bool = False, callback=None) -> None: + """ + Load data for UEX corp wingman. + + Args: + reload (bool, optional): Whether to reload the data from the source. Defaults to False. 
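+            callback (callable, optional): Awaited once all data has been loaded. Defaults to None.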
+ """ + + if reload_prices: + await self._load_commodity_prices() + self.threaded_execution(self._save_to_cachefile) + return + + self.game_version = (await self._fetch_uex_data("game_versions"))["live"] + + async def _load_from_cache(): + if not self.uexcorp_cache: + return + + # check file age + data = {} + try: + with open(self.cachefile, "r", encoding="UTF-8") as f: + data = json.load(f) + except (FileNotFoundError, json.decoder.JSONDecodeError): + pass + + # check file age + if ( + data.get("timestamp") + and data.get("timestamp") + self.uexcorp_cache_duration + > self._get_timestamp() + and data.get("skill_version") == self.skill_version + and data.get("game_version") == self.game_version + ): + if data.get("ships"): + self.ships = data["ships"] + if data.get("commodities"): + self.commodities = data["commodities"] + if data.get("systems"): + self.systems = data["systems"] + if data.get("terminals"): + self.terminals = data["terminals"] + # fix prices keys (from string to integer due to unintentional json conversion) + for terminal in self.terminals: + if "prices" in terminal: + terminal["prices"] = { + int(key): value + for key, value in terminal["prices"].items() + } + if data.get("planets"): + self.planets = data["planets"] + if data.get("moons"): + self.moons = data["moons"] + if data.get("cities"): + self.cities = data["cities"] + + async def _load_missing_data(): + load_purchase_and_rental = False + + if not self.ships: + load_purchase_and_rental = True + self.ships = await self._fetch_uex_data("vehicles") + self.ships = [ship for ship in self.ships if ship["game_version"]] + + if not self.commodities: + self.commodities = await self._fetch_uex_data("commodities") + self.commodities = [ + commodity + for commodity in self.commodities + if commodity["is_available"] == 1 + ] + + if not self.systems: + load_purchase_and_rental = True + self.systems = await self._fetch_uex_data("star_systems") + self.systems = [ + system for system in self.systems if system["is_available"] == 1 + ] + for system in self.systems: + self.terminals += await self._fetch_uex_data( + f"terminals/id_star_system/{system['id']}/is_available/1/is_visible/1" + ) + self.cities += await self._fetch_uex_data( + f"cities/id_star_system/{system['id']}" + ) + self.moons += await self._fetch_uex_data( + f"moons/id_star_system/{system['id']}" + ) + self.planets += await self._fetch_uex_data( + f"planets/id_star_system/{system['id']}" + ) + await self._load_commodity_prices() + + # data manipulation + planet_codes = [] + for planet in self.planets: + if planet["code"] not in planet_codes: + planet_codes.append(planet["code"]) + + for terminal in self.terminals: + if terminal["id_space_station"]: + parts = terminal["nickname"].split(" ") + for part in parts: + if ( + len(part.split("-")) == 2 + and part.split("-")[0] in planet_codes + and re.match(r"^L\d+$", part.split("-")[1]) + ): + terminal["id_planet"] = "" + break + + if load_purchase_and_rental: + await self._load_purchase_and_rental() + + async def _load_data(callback=None): + await _load_from_cache() + await _load_missing_data() + self.threaded_execution(self._save_to_cachefile) + + if callback: + await callback() + + self.threaded_execution(_load_data, callback) + + async def _save_to_cachefile(self) -> None: + if ( + self.uexcorp_cache + and self.uexcorp_cache_duration > 0 + and self.ships + and self.commodities + and self.systems + and self.terminals + and self.planets + and self.moons + and self.cities + ): + data = { + "timestamp": self._get_timestamp(), + 
"skill_version": self.skill_version, + "game_version": self.game_version, + "ships": self.ships, + "commodities": self.commodities, + "systems": self.systems, + "terminals": self.terminals, + "planets": self.planets, + "moons": self.moons, + "cities": self.cities, + } + with open(self.cachefile, "w", encoding="UTF-8") as f: + json.dump(data, f, indent=4) + + async def _load_purchase_and_rental(self) -> None: + """ + Load purchase and rental information for ships and vehicles. + + Returns: + None + """ + ships_purchase = await self._fetch_uex_data("vehicles_purchases_prices_all") + ships_rental = await self._fetch_uex_data("vehicles_rentals_prices_all") + + for ship in self.ships: + ship["purchase"] = [ + purchase + for purchase in ships_purchase + if purchase["id_vehicle"] == ship["id"] + ] + ship["rental"] = [ + rental for rental in ships_rental if rental["id_vehicle"] == ship["id"] + ] + + for terminal in self.terminals: + terminal["vehicle_rental"] = [ + rental + for rental in ships_rental + if rental["id_terminal"] == terminal["id"] + ] + terminal["vehicle_purchase"] = [ + purchase + for purchase in ships_purchase + if purchase["id_terminal"] == terminal["id"] + ] + + async def _load_commodity_prices(self) -> None: + """ + Load commodity prices from UEX corp API. + + Returns: + None + """ + + self.cache["readable_objects"] = {} + + # currently the prices are saved in api v1 style to minimize rework time for now + for i in range(0, len(self.terminals), 10): + terminals_batch = self.terminals[i : i + 10] + terminal_ids = [ + terminal["id"] + for terminal in terminals_batch + if terminal["type"] in ["commodity", "commodity_raw"] + ] + if not terminal_ids: + continue + + commodity_prices = await self._fetch_uex_data( + "commodities_prices/id_terminal/" + ",".join(map(str, terminal_ids)) + ) + + for terminal in terminals_batch: + terminal["prices"] = {} + + for commodity_price in commodity_prices: + if commodity_price["id_terminal"] == terminal["id"]: + commodity = next( + ( + commodity + for commodity in self.commodities + if commodity["id"] == commodity_price["id_commodity"] + ), + None, + ) + if commodity: + transaction_type = ( + "buy" if commodity_price["price_buy"] > 0 else "sell" + ) + price = { + "name": self._format_commodity_name(commodity), + "kind": commodity["kind"], + "operation": transaction_type, + "price_buy": commodity_price["price_buy"], + "price_sell": commodity_price["price_sell"], + "date_update": commodity_price["date_modified"], + "is_updated": bool(commodity_price["date_modified"]), + "scu": commodity_price[f"scu_{transaction_type}"] + or None, + "scu_average": commodity_price[ + f"scu_{transaction_type}_avg" + ] + or None, + "scu_average_week": commodity_price[ + f"scu_{transaction_type}_avg_week" + ] + or None, + } + # calculate expected scu + count = 0 + total = 0 + if price["scu"]: + count += 2 + total += price["scu"] * 2 + if price["scu_average"]: + count += 1 + total += price["scu_average"] + if price["scu_average_week"]: + count += 1 + total += price["scu_average_week"] + price["scu_expected"] = ( + int(total / count) if count > 0 else None + ) + + terminal["prices"][commodity["id"]] = price + + async def _start_loading_data(self) -> None: + """ + Prepares the wingman for execution by initializing necessary variables and loading data. + + This method retrieves configuration values, sets up API URL and timeout, and loads data + such as ship names, commodity names, system names, terminal names, city names, + moon names and planet names. 
+ It also adds additional context information for function parameters. + + Returns: + None + """ + + # fix api url + if self.uexcorp_api_url and self.uexcorp_api_url.endswith("/"): + self.uexcorp_api_url = self.uexcorp_api_url[:-1] + + # fix timeout + self.uexcorp_api_timeout = max(3, self.uexcorp_api_timeout) + self.uexcorp_api_timeout_retries = max(0, self.uexcorp_api_timeout_retries) + + await self._load_data(False, self._prepare_data) + + async def _prepare_data(self) -> None: + """ + Prepares the wingman for execution by initializing necessary variables. + """ + + self.planets = [ + planet for planet in self.planets if planet["is_available"] == 1 + ] + + self.moons = [moon for moon in self.moons if moon["is_available"] == 1] + + # remove urls from ships + for ship in self.ships: + ship.pop("url_store", None) + ship.pop("url_brochure", None) + ship.pop("url_hotsite", None) + ship.pop("url_video", None) + ship.pop("url_photos", None) + + # remove screenshot from terminals + for terminal in self.terminals: + terminal.pop("screenshot", None) + terminal.pop("screenshot_thumbnail", None) + terminal.pop("screenshot_author", None) + + # add hull trading option to trade ports + for terminal in self.terminals: + terminal["hull_trading"] = bool(terminal["has_loading_dock"]) + + # add hull trading option to ships + ships_for_hull_trading = [ + "Hull C", + "Hull D", + "Hull E", + ] + for ship in self.ships: + ship["hull_trading"] = ship["name"] in ships_for_hull_trading + + self.ship_names = [self._format_ship_name(ship) for ship in self.ships] + self.ship_dict = { + self._format_ship_name(ship).lower(): ship for ship in self.ships + } + self.ship_code_dict = {ship["id"]: ship for ship in self.ships} + + self.commodity_names = [ + self._format_commodity_name(commodity) for commodity in self.commodities + ] + self.commodity_dict = { + self._format_commodity_name(commodity).lower(): commodity + for commodity in self.commodities + } + self.commodity_code_dict = { + commodity["id"]: commodity for commodity in self.commodities + } + + self.system_names = [ + self._format_system_name(system) for system in self.systems + ] + self.system_dict = { + self._format_system_name(system).lower(): system for system in self.systems + } + self.system_code_dict = {system["id"]: system for system in self.systems} + + self.terminal_names = [ + self._format_terminal_name(terminal) for terminal in self.terminals + ] + self.terminal_names_trading = [ + self._format_terminal_name(terminal) + for terminal in self.terminals + if terminal["type"] in ["commodity", "commodity_raw"] + ] + self.terminal_dict = { + self._format_terminal_name(terminal).lower(): terminal + for terminal in self.terminals + } + self.terminal_code_dict = { + terminal["id"]: terminal for terminal in self.terminals + } + for terminal in self.terminals: + if terminal["id_star_system"]: + self.terminals_by_system[terminal["id_star_system"]].append(terminal) + if terminal["id_planet"]: + self.terminals_by_planet[terminal["id_planet"]].append(terminal) + if terminal["id_moon"]: + self.terminals_by_moon[terminal["id_moon"]].append(terminal) + if terminal["id_city"]: + self.terminals_by_city[terminal["id_city"]].append(terminal) + + self.city_names = [self._format_city_name(city) for city in self.cities] + self.city_dict = { + self._format_city_name(city).lower(): city for city in self.cities + } + self.city_code_dict = {city["id"]: city for city in self.cities} + for city in self.cities: + self.cities_by_planet[city["id_planet"]].append(city) + + 
self.moon_names = [self._format_moon_name(moon) for moon in self.moons] + self.moon_dict = { + self._format_moon_name(moon).lower(): moon for moon in self.moons + } + self.moon_code_dict = {moon["id"]: moon for moon in self.moons} + for moon in self.moons: + self.moons_by_planet[moon["id_planet"]].append(moon) + + self.planet_names = [ + self._format_planet_name(planet) for planet in self.planets + ] + self.planet_dict = { + self._format_planet_name(planet).lower(): planet for planet in self.planets + } + self.planet_code_dict = {planet["id"]: planet for planet in self.planets} + for planet in self.planets: + self.planets_by_system[planet["id_star_system"]].append(planet) + + self.location_names_set = set( + self.system_names + + self.terminal_names + + self.city_names + + self.moon_names + + self.planet_names + ) + self.location_names_set_trading = set( + self.system_names + + self.terminal_names_trading + + self.city_names + + self.moon_names + + self.planet_names + ) + + self.skill_loaded = True + if self.skill_loaded_asked: + self.skill_loaded_asked = False + await self._print("UEXcorp skill data loading complete.", False, False) + + def add_context(self, content: str): + """ + Adds additional context to the first message content, + that represents the context given to open ai. + + Args: + content (str): The additional context to be added. + + Returns: + None + """ + self.dynamic_context += "\n" + content + + def _get_timestamp(self) -> int: + """ + Get the current timestamp as an integer. + + Returns: + int: The current timestamp. + """ + return int(datetime.now().timestamp()) + + def _get_header(self): + """ + Returns the header dictionary containing the API key. + Used for API requests. + + Returns: + dict: The header dictionary with the API key. + """ + return {} # no header needed anymore for currently used endpoints + key = self.uexcorp_api_key + return {"Authorization": f"Bearer {key}"} + + async def _fetch_uex_data( + self, endpoint: str, params: Optional[dict[str, any]] = None + ) -> list[dict[str, any]]: + """ + Fetches data from the specified endpoint. + + Args: + endpoint (str): The API endpoint to fetch data from. + params (Optional[dict[str, any]]): Optional parameters to include in the request. + + Returns: + list[dict[str, any]]: The fetched data as a list of dictionaries. + """ + url = f"{self.uexcorp_api_url}/{endpoint}" + await self._print(f"Fetching data from {url} ...", True) + + request_count = 1 + timeout_error = False + requests_error = False + + while request_count == 1 or ( + request_count <= (self.uexcorp_api_timeout_retries + 1) and timeout_error + ): + if requests_error: + await self._print(f"Retrying request #{request_count}...", True) + requests_error = False + + timeout_error = False + try: + response = requests.get( + url, + params=params, + timeout=(self.uexcorp_api_timeout * request_count), + headers=self._get_header(), + ) + response.raise_for_status() + except requests.exceptions.RequestException as e: + await self._print(f"Error while retrieving data from {url}: {e}") + requests_error = True + if isinstance(e, requests.exceptions.Timeout): + timeout_error = True + request_count += 1 + + if requests_error: + return [] + + response_json = response.json() + if "status" not in response_json or response_json["status"] != "ok": + await self._print(f"Error while retrieving data from {url}") + return [] + + return response_json.get("data", []) + + async def _fetch_lore(self, search: str) -> dict[str, any]: + """ + Fetches data for a search query. 
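+        Queries the Star Citizen wiki Galactapedia search endpoint and loads up to five matching articles in parallel.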
+ + Args: + search (str): The search query to fetch data for. + + Returns: + dict[str, any]: The fetched data as a dictionary. + """ + url = "https://api.star-citizen.wiki/api/v2/galactapedia/search" + await self._print( + f"Fetching data from SC wiki ({url}) for search query '{search}' ...", True + ) + + request_count = 1 + max_retries = 2 + timeout_error = False + requests_error = False + + while request_count == 1 or ( + request_count <= (max_retries + 1) and timeout_error + ): + if requests_error: + await self._print(f"Retrying request #{request_count}...", True) + requests_error = False + + timeout_error = False + try: + response = requests.post( + url, + headers={ + "accept": "application/json", + "Content-Type": "application/json", + }, + json={"query": search}, + timeout=self.uexcorp_api_timeout, + ) + response.raise_for_status() + except requests.exceptions.RequestException as e: + await self._print(f"Error while retrieving data from {url}: {e}") + requests_error = True + if isinstance(e, requests.exceptions.Timeout): + timeout_error = True + request_count += 1 + + if requests_error: + return {} + + try: + response_json = response.json() + except json.decoder.JSONDecodeError as e: + await self._print(f"Error while retrieving data from {url}: {e}") + return None + + max_articles = 5 + min_articles = 3 + max_length = 2000 + max_time = self.uexcorp_api_timeout + loaded_articles = [] + articles = response_json.get("data", [])[:max_articles] + + async def _load_article(article_id: str): + article_url = ( + f"https://api.star-citizen.wiki/api/v2/galactapedia/{article_id}" + ) + try: + article_response = requests.get( + article_url, timeout=self.uexcorp_api_timeout + ) + article_response.raise_for_status() + except requests.exceptions.RequestException as e: + await self._print( + f"Error while retrieving data from {article_url}: {e}" + ) + return None + + try: + article_response_json = article_response.json() + except json.decoder.JSONDecodeError as e: + await self._print( + f"Error while retrieving data from {article_url}: {e}" + ) + return None + + response = ( + article_response_json.get("data", {}) + .get("translations", {}) + .get("en_EN", "") + ) + # scrap links and only keep link name for example [link name](link) + response = re.sub(r"\[([^\]]+)\]\([^)]+\)", r"\1", response) + response = re.sub(r"\n", " ", response) + response = re.sub(r"\s+", " ", response) + response = response[:max_length] + loaded_articles.append(response) + + start_time = datetime.now() + for article in articles: + self.threaded_execution(_load_article, article["id"]) + + while ( + len(loaded_articles) < min_articles + and (datetime.now() - start_time).seconds < max_time + ): + await asyncio.sleep(0.1) + + return loaded_articles + + def _format_ship_name(self, ship: dict[str, any], full_name: bool = True) -> str: + """ + Formats the name of a ship. + This represents a list of names that can be used by the player. + So if you like to use manufacturer names + ship names, do it here. + + Args: + ship (dict[str, any]): The ship dictionary containing the ship details. + + Returns: + str: The formatted ship name. + """ + if full_name: + return ship["name_full"] + + return ship["name"] + + def _format_terminal_name( + self, terminal: dict[str, any], full_name: bool = False + ) -> str: + """ + Formats the name of a terminal. + + Args: + terminal (dict[str, any]): The terminal dictionary containing the name. + + Returns: + str: The formatted terminal name. 
+ """ + if full_name: + return terminal["name"] + + return terminal["nickname"] + + def _format_city_name(self, city: dict[str, any]) -> str: + """ + Formats the name of a city. + + Args: + city (dict[str, any]): A dictionary representing a city. + + Returns: + str: The formatted name of the city. + """ + return city["name"] + + def _format_planet_name(self, planet: dict[str, any]) -> str: + """ + Formats the name of a planet. + + Args: + planet (dict[str, any]): A dictionary representing a planet. + + Returns: + str: The formatted name of the planet. + """ + return planet["name"] + + def _format_moon_name(self, moon: dict[str, any]) -> str: + """ + Formats the name of a moon. + + Args: + moon (dict[str, any]): The moon dictionary. + + Returns: + str: The formatted moon name. + """ + return moon["name"] + + def _format_system_name(self, system: dict[str, any]) -> str: + """ + Formats the name of a system. + + Args: + system (dict[str, any]): The system dictionary containing the name. + + Returns: + str: The formatted system name. + """ + return system["name"] + + def _format_commodity_name(self, commodity: dict[str, any]) -> str: + """ + Formats the name of a commodity. + + Args: + commodity (dict[str, any]): The commodity dictionary. + + Returns: + str: The formatted commodity name. + """ + return commodity["name"] + + def get_tools(self) -> list[tuple[str, dict]]: + trading_routes_optional = [ + "money_to_spend", + "free_cargo_space", + "position_end_name", + "commodity_name", + "illegal_commodities_allowed", + "maximal_number_of_routes", + ] + trading_routes_required = ["ship_name"] + + if self.uexcorp_tradestart_mandatory: + trading_routes_required.append("position_start_name") + else: + trading_routes_optional.append("position_start_name") + + tools = [ + ( + "get_trading_routes", + { + "type": "function", + "function": { + "name": "get_trading_routes", + "description": "Finds all possible commodity trade options and gives back a selection of the best trade routes. Needs ship name and start position.", + "parameters": { + "type": "object", + "properties": { + "ship_name": {"type": "string"}, + "position_start_name": {"type": "string"}, + "money_to_spend": {"type": "number"}, + "position_end_name": {"type": "string"}, + "free_cargo_space": {"type": "number"}, + "commodity_name": {"type": "string"}, + "illegal_commodities_allowed": {"type": "boolean"}, + "maximal_number_of_routes": {"type": "number"}, + }, + "required": trading_routes_required, + "optional": trading_routes_optional, + }, + }, + }, + ), + ( + "get_locations_to_sell_to", + { + "type": "function", + "function": { + "name": "get_locations_to_sell_to", + "description": "Finds the best locations at what the player can sell cargo at. Only give position_name if the player specifically wanted to filter for it. Needs commodity name.", + "parameters": { + "type": "object", + "properties": { + "commodity_name": {"type": "string"}, + "ship_name": {"type": "string"}, + "position_name": {"type": "string"}, + "commodity_amount": {"type": "number"}, + "maximal_number_of_locations": {"type": "number"}, + }, + "required": ["commodity_name"], + "optional": [ + "ship_name", + "position_name", + "commodity_amount", + "maximal_number_of_locations", + ], + }, + }, + }, + ), + ( + "get_locations_to_buy_from", + { + "type": "function", + "function": { + "name": "get_locations_to_buy_from", + "description": "Finds the best locations at what the player can buy cargo at. Only give position_name if the player specifically wanted to filter for it. 
Needs commodity name.", + "parameters": { + "type": "object", + "properties": { + "commodity_name": {"type": "string"}, + "ship_name": {"type": "string"}, + "position_name": {"type": "string"}, + "commodity_amount": {"type": "number"}, + "maximal_number_of_locations": {"type": "number"}, + }, + "required": ["commodity_name"], + "optional": [ + "ship_name", + "position_name", + "commodity_amount", + "maximal_number_of_locations", + ], + }, + }, + }, + ), + ( + "get_location_information", + { + "type": "function", + "function": { + "name": "get_location_information", + "description": "Gives information and commodity prices of this location. Execute this if the player asks for all buy or sell options for a specific location.", + "parameters": { + "type": "object", + "properties": { + "location_name": {"type": "string"}, + }, + "required": ["location_name"], + }, + }, + }, + ), + ( + "get_ship_information", + { + "type": "function", + "function": { + "name": "get_ship_information", + "description": "Gives information about the given ship. If a player asks to rent something or buy a ship, this function needs to be executed.", + "parameters": { + "type": "object", + "properties": { + "ship_name": {"type": "string"}, + }, + "required": ["ship_name"], + }, + }, + }, + ), + ( + "get_ship_comparison", + { + "type": "function", + "function": { + "name": "get_ship_comparison", + "description": "Gives information about given ships. Also execute this function if the player asks for a ship information on multiple ships or a model series.", + "parameters": { + "type": "object", + "properties": { + "ship_names": { + "type": "array", + "items": {"type": "string"}, + }, + }, + "required": ["ship_names"], + }, + }, + }, + ), + ( + "get_commodity_information", + { + "type": "function", + "function": { + "name": "get_commodity_information", + "description": "Gives information about the given commodity. If a player asks for information about a commodity, this function needs to be executed.", + "parameters": { + "type": "object", + "properties": { + "commodity_name": {"type": "string"}, + }, + "required": ["commodity_name"], + }, + }, + }, + ), + ( + "get_commodity_prices_and_terminals", + { + "type": "function", + "function": { + "name": "get_commodity_prices_and_terminals", + "description": "Gives information about the given commodity and its buy and sell offers. 
If a player asks for buy and sell information or locations on a commodity, this function needs to be executed.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "commodity_name": {"type": "string"},
+                            },
+                            "required": ["commodity_name"],
+                        },
+                    },
+                },
+            ),
+            (
+                "reload_current_commodity_prices",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "reload_current_commodity_prices",
+                        "description": "Reloads the current commodity prices from UEX corp.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {},
+                            "required": [],
+                        },
+                    },
+                },
+            ),
+        ]
+
+        if self.DEV_MODE:
+            tools.append(
+                (
+                    "show_cached_function_values",
+                    {
+                        "type": "function",
+                        "function": {
+                            "name": "show_cached_function_values",
+                            "description": "Prints the cached function's argument values to the console.",
+                            "parameters": {
+                                "type": "object",
+                                "properties": {},
+                                "required": [],
+                            },
+                        },
+                    },
+                ),
+            )
+
+        return tools
+
+    async def execute_tool(
+        self, tool_name: str, parameters: dict[str, any]
+    ) -> tuple[str, str]:
+        function_response = ""
+        instant_response = ""
+
+        functions = {
+            "get_trading_routes": "get_trading_routes",
+            "get_locations_to_sell_to": "get_locations_to_sell_to",
+            "get_locations_to_buy_from": "get_locations_to_buy_from",
+            "get_location_information": "get_location_information",
+            "get_ship_information": "get_ship_information",
+            "get_ship_comparison": "get_ship_comparison",
+            "get_commodity_information": "get_commodity_information",
+            "get_commodity_prices_and_terminals": "get_commodity_information",
+            "reload_current_commodity_prices": "reload_current_commodity_prices",
+            "show_cached_function_values": "show_cached_function_values",
+        }
+
+        try:
+            if tool_name in functions:
+                if not self.skill_loaded:
+                    self.skill_loaded_asked = True
+                    await self._print(
+                        "UEXcorp skill is not loaded yet. Please wait a moment.",
+                        False,
+                        False,
+                    )
+                    function_response = (
+                        "Data is still being loaded. Please wait a moment."
+                    )
+                    return function_response, instant_response
+
+                self.start_execution_benchmark()
+                await self._print(f"Executing function: {tool_name}")
+                function = getattr(self, "_gpt_call_" + functions[tool_name])
+                function_response = await function(**parameters)
+                if self.settings.debug_mode:
+                    await self.print_execution_time()
+                if self.DEV_MODE:
+                    await self._print(
+                        f"_gpt_call_{functions[tool_name]} response: {function_response}",
+                        True,
+                    )
+        except Exception:
+            with open(self.logfileerror, "a", encoding="UTF-8") as file_object:
+                file_object.write(traceback.format_exc())
+                file_object.write(
+                    "========================================================================================\n"
+                )
+                file_object.write(
+                    f"Above error while executing custom function: _gpt_call_{tool_name}\n"
+                )
+                file_object.write(f"With parameters: {parameters}\n")
+                file_object.write(f"On date: {datetime.now()}\n")
+                file_object.write(f"Version: {self.skill_version}\n")
+                file_object.write(
+                    "========================================================================================\n"
+                )
+            await self._print(
+                f"Error while executing custom function: {tool_name}\nCheck the log file for more details."
+            )
+            function_response = f"Error while executing custom function: {tool_name}"
+            function_response += "\nTell the user there seems to be an error and that it should be reported to the 'uexcorp skill developer (JayMatthew on Discord)'."
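+
+        # instant_response stays empty: this skill always lets the LLM phrase the final reply from function_response.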
+        return function_response, instant_response
+
+    async def is_waiting_response_needed(self, tool_name: str) -> bool:
+        return True
+
+    async def _find_closest_match(
+        self, search: str | None, lst: list[str] | set[str]
+    ) -> str | None:
+        """
+        Finds the closest match to a given string in a list,
+        or returns an exact match if one is found.
+        If it is not an exact match, OpenAI is used to find the closest match.
+
+        Args:
+            search (str): The search to find a match for.
+            lst (list): The list of strings to search for a match.
+
+        Returns:
+            str or None: The closest match found in the list, or None if no match is found.
+        """
+        if search is None or search == "None":
+            return None
+
+        self._log(f"Searching for closest match to '{search}' in list.", True)
+
+        checksum = f"{hash(frozenset(lst))}-{hash(search)}"
+        if checksum in self.cache["search_matches"]:
+            match = self.cache["search_matches"][checksum]
+            self._log(f"Found closest match to '{search}' in cache: '{match}'", True)
+            return match
+
+        if search in lst:
+            self._log(f"Found exact match to '{search}' in list.", True)
+            return search
+
+        # make a list of possible matches
+        closest_matches = difflib.get_close_matches(search, lst, n=10, cutoff=0.4)
+        closest_matches.extend(item for item in lst if search.lower() in item.lower())
+        self._log(
+            f"Making a list of closest matches for search term '{search}': {', '.join(closest_matches)}",
+            True,
+        )
+
+        if not closest_matches:
+            self._log(
+                f"No closest match found for '{search}' in list. Returning None.", True
+            )
+            return None
+
+        messages = [
+            {
+                "role": "system",
+                "content": f"""
+                    I'll give you just a string value.
+                    You will figure out which value in this list best represents this value: {', '.join(closest_matches)}
+                    Keep in mind that the given string value can be misspelled or have missing words as it has its origin in a speech-to-text process.
+                    You must only return the value of the closest match to the given value from the defined list, nothing else.
+                    For example, if "Hercules A2" is given and the list contains "A2, C2, M2", you will return "A2" as a string.
+                    Or if "C2" is given and the list contains "A2 Hercules Star Lifter, C2 Monster Truck, M2 Extreme cool ship", you will return "C2 Monster Truck" as a string.
+                    For longer search terms, prefer the exact match if it is in the list.
+                    The response must not contain anything other than the exact value of the closest match from the list.
+                    If you can't find a match, return 'None'. Never return the given search value.
+                """,
+            },
+            {
+                "role": "user",
+                "content": search,
+            },
+        ]
+        completion = await self.llm_call(messages)
+        answer = (
+            completion.choices[0].message.content
+            if completion and completion.choices
+            else ""
+        )
+
+        if not answer:
+            dumb_match = difflib.get_close_matches(
+                search, closest_matches, n=1, cutoff=0.9
+            )
+            if dumb_match:
+                self._log(
+                    f"OpenAI did not answer for '{search}'. Returning dumb match '{dumb_match}'",
+                    True,
+                )
+                return dumb_match[0]
+            else:
+                self._log(
+                    f"OpenAI did not answer for '{search}' and dumb match not possible. Returning None.",
+                    True,
+                )
+                return None
+
+        self._log(f"OpenAI answered: '{answer}'", True)
+
+        if answer == "None" or answer not in closest_matches:
+            self._log(
+                f"No closest match found for '{search}' in list. Returning None.", True
+            )
+            return None
+
+        self._log(f"Found closest match to '{search}' in list: '{answer}'", True)
+        self.add_context(f"\n\nInstead of '{search}', you should use '{answer}'.")
+        self.cache["search_matches"][checksum] = answer
+        return answer
+
+    async def get_prompt(self) -> str | None:
+        """Return additional context."""
+        additional_context = self.config.prompt or ""
+        additional_context += "\n" + self.dynamic_context
+        return additional_context
+
+    async def _gpt_call_show_cached_function_values(self) -> str:
+        """
+        Prints the cached function's argument values to the console.
+
+        Returns:
+            str: A message indicating that the cached function's argument values have been printed to the console.
+        """
+        self._log(self.cache["function_args"], True)
+        return "The cached function values are: \n" + json.dumps(
+            self.cache["function_args"]
+        )
+
+    async def _gpt_call_reload_current_commodity_prices(self) -> str:
+        """
+        Reloads the current commodity prices from UEX corp.
+
+        Returns:
+            str: A message indicating that the current commodity prices have been reloaded.
+        """
+        await self._load_data(True)
+        # clear cached data
+        for key in self.cache:
+            self.cache[key] = {}
+
+        self._log("Reloaded current commodity prices from UEX corp.", True)
+        return "Reloaded current commodity prices from UEX corp."
+
+    async def _gpt_call_get_commodity_information(
+        self, commodity_name: str = None
+    ) -> str:
+        """
+        Retrieves information about a given commodity.
+
+        Args:
+            commodity_name (str, optional): The name of the commodity. Defaults to None.
+
+        Returns:
+            str: The information about the commodity in JSON format, or an error message if the commodity is not found.
+        """
+        self._log(f"Parameters: Commodity: {commodity_name}", True)
+
+        commodity_name = self._get_function_arg_from_cache(
+            "commodity_name", commodity_name
+        )
+
+        if commodity_name is None:
+            self._log("No commodity given. Ask for a commodity.", True)
+            return "No commodity given. Ask for a commodity."
+
+        misunderstood = []
+        closest_match = await self._find_closest_match(
+            commodity_name, self.commodity_names
+        )
+        if closest_match is None:
+            misunderstood.append(f"Commodity: {commodity_name}")
+        else:
+            commodity_name = closest_match
+
+        self._log(f"Interpreted Parameters: Commodity: {commodity_name}", True)
+
+        if misunderstood:
+            misunderstood_str = ", ".join(misunderstood)
+            self._log(
+                f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}",
+                True,
+            )
+            return f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}"
+
+        commodity = self._get_commodity_by_name(commodity_name)
+        if commodity is not None:
+            output_commodity = await self._get_converted_commodity_for_output(commodity)
+            self._log(output_commodity, True)
+            return json.dumps(output_commodity)
+
+    async def _gpt_call_get_ship_information(self, ship_name: str = None) -> str:
+        """
+        Retrieves information about a specific ship.
+
+        Args:
+            ship_name (str, optional): The name of the ship. Defaults to None.
+
+        Returns:
+            str: The ship information or an error message.
+
+        """
+        self._log(f"Parameters: Ship: {ship_name}", True)
+
+        ship_name = self._get_function_arg_from_cache("ship_name", ship_name)
+
+        if ship_name is None:
+            self._log("No ship given. Ask for a ship. Don't say sorry.", True)
+            return "No ship given. Ask for a ship. Don't say sorry."
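+
+        # Fuzzy-match the (possibly misheard) ship name against all known ship names before looking it up.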
+        misunderstood = []
+        closest_match = await self._find_closest_match(ship_name, self.ship_names)
+        if closest_match is None:
+            misunderstood.append(f"Ship: {ship_name}")
+        else:
+            ship_name = closest_match
+
+        self._log(f"Interpreted Parameters: Ship: {ship_name}", True)
+
+        if misunderstood:
+            misunderstood_str = ", ".join(misunderstood)
+            self._log(
+                f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}",
+                True,
+            )
+            return f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}"
+
+        ship = self._get_ship_by_name(ship_name)
+        if ship is not None:
+            output_ship = f"Summarize in natural language: {(await self._get_converted_ship_for_output(ship))}"
+            self._log(output_ship, True)
+            return json.dumps(output_ship)
+
+    async def _gpt_call_get_ship_comparison(self, ship_names: list[str] = None) -> str:
+        """
+        Retrieves information about multiple ships.
+
+        Args:
+            ship_names (list[str], optional): The names of the ships. Defaults to None.
+
+        Returns:
+            str: The ship information or an error message.
+        """
+        self._log(f"Parameters: Ships: {', '.join(ship_names)}", True)
+
+        if ship_names is None or not ship_names:
+            self._log("No ship given. Ask for a ship. Don't say sorry.", True)
+            return "No ship given. Ask for a ship. Don't say sorry."
+
+        misunderstood = []
+        ships = []
+        for ship_name in ship_names:
+            closest_match = await self._find_closest_match(ship_name, self.ship_names)
+            if closest_match is None:
+                misunderstood.append(ship_name)
+            else:
+                ship_name = closest_match
+                ships.append(self._get_ship_by_name(ship_name))
+
+        self._log(f"Interpreted Parameters: Ships: {', '.join(ship_names)}", True)
+
+        if misunderstood:
+            self._log(
+                f"These ship names do not exist in game. Exactly ask for clarification of these ships: {', '.join(misunderstood)}",
+                True,
+            )
+            return f"These ship names do not exist in game. Exactly ask for clarification of these ships: {', '.join(misunderstood)}"
+
+        output = {}
+        for ship in ships:
+
+            async def get_ship_info(ship):
+                output[self._format_ship_name(ship)] = (
+                    await self._get_converted_ship_for_output(ship)
+                )
+
+            self.threaded_execution(get_ship_info, ship)
+
+        while len(output) < len(ships):
+            await asyncio.sleep(0.1)
+
+        output = (
+            "Point out the differences between these ships in natural language and full sentences, without just listing the differences. Describe them! And don't mention anything that both of them can't do:\n"
+            + json.dumps(output)
+        )
+        self._log(output, True)
+        return output
+
+    async def _gpt_call_get_location_information(
+        self, location_name: str = None
+    ) -> str:
+        """
+        Retrieves information about a given location.
+
+        Args:
+            location_name (str, optional): The name of the location. Defaults to None.
+
+        Returns:
+            str: The information about the location in JSON format, or an error message if the location is not found.
+        """
+        self._log(f"Parameters: Location: {location_name}", True)
+
+        location_name = self._get_function_arg_from_cache(
+            "location_name", location_name
+        )
+
+        if location_name is None:
+            self._log("No location given. Ask for a location.", True)
+            return "No location given. Ask for a location."
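+
+        # Fuzzy-match the spoken location name against systems, terminals, cities, moons and planets.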
+        misunderstood = []
+        closest_match = await self._find_closest_match(
+            location_name, self.location_names_set
+        )
+        if closest_match is None:
+            misunderstood.append(f"Location: {location_name}")
+        else:
+            location_name = closest_match
+
+        self._log(f"Interpreted Parameters: Location: {location_name}", True)
+
+        if misunderstood:
+            misunderstood_str = ", ".join(misunderstood)
+            self._log(
+                f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}",
+                True,
+            )
+            return f"These given parameters do not exist in game. Exactly ask for clarification of these values: {misunderstood_str}"
+
+        formatting = "Summarize in natural language: "
+
+        # get a clone of the data
+        terminal = self._get_terminal_by_name(location_name)
+        if terminal is not None:
+            output = await self._get_converted_terminal_for_output(terminal)
+            self._log(output, True)
+            return formatting + json.dumps(output)
+        city = self._get_city_by_name(location_name)
+        if city is not None:
+            output = await self._get_converted_city_for_output(city)
+            self._log(output, True)
+            return formatting + json.dumps(output)
+        moon = self._get_moon_by_name(location_name)
+        if moon is not None:
+            output = await self._get_converted_moon_for_output(moon)
+            self._log(output, True)
+            return formatting + json.dumps(output)
+        planet = self._get_planet_by_name(location_name)
+        if planet is not None:
+            output = await self._get_converted_planet_for_output(planet)
+            self._log(output, True)
+            return formatting + json.dumps(output)
+        system = self._get_system_by_name(location_name)
+        if system is not None:
+            output = await self._get_converted_system_for_output(system)
+            self._log(output, True)
+            return formatting + json.dumps(output)
+        return f"Could not find location '{location_name}'."
+
+    async def _get_converted_terminal_for_output(
+        self, terminal: dict[str, any], allow_lore: bool = True
+    ) -> dict[str, any]:
+        """
+        Converts a terminal dictionary to a dictionary that can be used as output.
+
+        Args:
+            terminal (dict[str, any]): The terminal dictionary to be converted.
+
+        Returns:
+            dict[str, any]: The converted terminal dictionary.
+        """
+        lore = allow_lore and self.uexcorp_add_lore
+        checksum = f"terminal--{terminal['id']}--{lore}"
+        if checksum in self.cache["readable_objects"]:
+            return self.cache["readable_objects"][checksum]
+
+        output = {
+            "type": "terminal",
+            "subtype": terminal["type"],
+            "name": self._format_terminal_name(terminal, True),
+            "nickname": self._format_terminal_name(terminal),
+            "star_system": self._get_system_name_by_code(terminal["id_star_system"]),
+            "planet": self._get_planet_name_by_code(terminal["id_planet"]),
+            "city": self._get_city_name_by_code(terminal["id_city"]),
+            "moon": self._get_moon_name_by_code(terminal["id_moon"]),
+        }
+        if terminal["type"] == "commodity":
+            output["hull_trading"] = (
+                "Trading with large ships that need a loading area is possible."
+                if "hull_trading" in terminal and terminal["hull_trading"]
+                else "Trading with large ships that need a loading area is not possible."
+            )
+
+        if "vehicle_rental" in terminal:
+            output["vehicles_for_rental"] = []
+            for option in terminal["vehicle_rental"]:
+                ship = self.ship_code_dict[option["id_vehicle"]]
+                output["vehicles_for_rental"].append(
+                    f"Rent {self._format_ship_name(ship)} for {option['price_rent']} aUEC."
+                )
+
+        if "vehicle_purchase" in terminal:
+            output["vehicles_for_purchase"] = []
+            for option in terminal["vehicle_purchase"]:
+                ship = self.ship_code_dict[option["id_vehicle"]]
+                output["vehicles_for_purchase"].append(
+                    f"Buy {self._format_ship_name(ship)} for {option['price_buy']} aUEC."
+                )
+
+        if "prices" in terminal:
+            buyable_commodities = [
+                f"{data['name']} for {data['price_buy']} aUEC per SCU"
+                for commodity_code, data in terminal["prices"].items()
+                if data["operation"] == "buy"
+            ]
+            sellable_commodities = [
+                f"{data['name']} for {data['price_sell']} aUEC per SCU"
+                for commodity_code, data in terminal["prices"].items()
+                if data["operation"] == "sell"
+            ]
+
+            if len(buyable_commodities):
+                output["buyable_commodities"] = ", ".join(buyable_commodities)
+            if len(sellable_commodities):
+                output["sellable_commodities"] = ", ".join(sellable_commodities)
+
+        for key in ["star_system", "planet", "city", "moon"]:
+            if output.get(key) is None:
+                output.pop(key, None)
+
+        if lore:
+            output["background_information"] = await self._fetch_lore(
+                self._format_terminal_name(terminal)
+            )
+
+        self.cache["readable_objects"][checksum] = output
+        return output
+
+    async def _get_converted_city_for_output(
+        self, city: dict[str, any]
+    ) -> dict[str, any]:
+        """
+        Converts a city dictionary to a dictionary that can be used as output.
+
+        Args:
+            city (dict[str, any]): The city dictionary to be converted.
+
+        Returns:
+            dict[str, any]: The converted city dictionary.
+        """
+        checksum = f"city--{city['id']}"
+        if checksum in self.cache["readable_objects"]:
+            return self.cache["readable_objects"][checksum]
+
+        output = {
+            "type": "City",
+            "name": self._format_city_name(city),
+            "star_system": self._get_system_name_by_code(city["id_star_system"]),
+            "planet": self._get_planet_name_by_code(city["id_planet"]),
+            "moon": self._get_moon_name_by_code(city["id_moon"]),
+            "is_armistice": "Yes" if city["is_armistice"] else "No",
+            "has_freight_elevator": "Yes" if city["has_freight_elevator"] else "No",
+            "has_docking_ports": "Yes" if city["has_docking_port"] else "No",
+            "has_clinic": "Yes" if city["has_clinic"] else "No",
+            "has_food": "Yes" if city["has_food"] else "No",
+            "has_refuel_option": "Yes" if city["has_refuel"] else "No",
+            "has_repair_option": "Yes" if city["has_repair"] else "No",
+            "has_refinery": "Yes" if city["has_refinery"] else "No",
+        }
+
+        terminals = self._get_terminals_by_position_name(city["name"])
+        if terminals:
+            output["options_to_trade"] = ", ".join(
+                [self._format_terminal_name(terminal) for terminal in terminals]
+            )
+
+        for key in ["star_system", "planet", "moon"]:
+            if output.get(key) is None:
+                output.pop(key, None)
+
+        if self.uexcorp_add_lore:
+            output["background_information"] = await self._fetch_lore(
+                self._format_city_name(city)
+            )
+
+        self.cache["readable_objects"][checksum] = output
+        return output
+
+    async def _get_converted_moon_for_output(
+        self, moon: dict[str, any]
+    ) -> dict[str, any]:
+        """
+        Converts a moon dictionary to a dictionary that can be used as output.
+
+        Args:
+            moon (dict[str, any]): The moon dictionary to be converted.
+
+        Returns:
+            dict[str, any]: The converted moon dictionary.
+ """ + checksum = f"moon--{moon['id']}" + if checksum in self.cache["readable_objects"]: + return self.cache["readable_objects"][checksum] + + output = { + "type": "Moon", + "name": self._format_moon_name(moon), + "star_system": self._get_system_name_by_code(moon["id_star_system"]), + "orbits_planet": self._get_planet_name_by_code(moon["id_planet"]), + } + + terminals = self._get_terminals_by_position_name(self._format_moon_name(moon)) + if terminals: + output["options_to_trade"] = ", ".join( + [self._format_terminal_name(terminal) for terminal in terminals] + ) + + for key in ["star_system", "orbits_planet"]: + if output.get(key) is None: + output.pop(key, None) + + if self.uexcorp_add_lore: + output["background_information"] = await self._fetch_lore( + self._format_moon_name(moon) + ) + + self.cache["readable_objects"][checksum] = output + return output + + async def _get_converted_planet_for_output( + self, planet: dict[str, any] + ) -> dict[str, any]: + """ + Converts a planet dictionary to a dictionary that can be used as output. + + Args: + planet (dict[str, any]): The planet dictionary to be converted. + + Returns: + dict[str, any]: The converted planet dictionary. + """ + checksum = f"planet--{planet['id']}" + if checksum in self.cache["readable_objects"]: + return self.cache["readable_objects"][checksum] + + output = { + "type": "Planet", + "name": self._format_planet_name(planet), + "star_system": self._get_system_name_by_code(planet["id_star_system"]), + } + + terminals = self._get_terminals_by_position_name(planet["name"]) + if terminals: + output["options_to_trade"] = ", ".join( + [self._format_terminal_name(terminal) for terminal in terminals] + ) + + moons = self._get_moons_by_planetcode(planet["code"]) + if moons: + output["moons"] = ", ".join( + [self._format_moon_name(moon) for moon in moons] + ) + + cities = self._get_cities_by_planetcode(planet["code"]) + if cities: + output["cities"] = ", ".join( + [self._format_city_name(city) for city in cities] + ) + + for key in ["star_system"]: + if output.get(key) is None: + output.pop(key, None) + + if self.uexcorp_add_lore: + output["background_information"] = await self._fetch_lore( + self._format_planet_name(planet) + ) + + self.cache["readable_objects"][checksum] = output + return output + + async def _get_converted_system_for_output( + self, system: dict[str, any] + ) -> dict[str, any]: + """ + Converts a system dictionary to a dictionary that can be used as output. + + Args: + system (dict[str, any]): The system dictionary to be converted. + + Returns: + dict[str, any]: The converted system dictionary. + """ + checksum = f"system--{system['id']}" + if checksum in self.cache["readable_objects"]: + return self.cache["readable_objects"][checksum] + + output = { + "type": "Star System", + "name": self._format_system_name(system), + } + + terminals = self._get_terminals_by_position_name(system["name"]) + if terminals: + output["options_to_trade"] = f"{len(terminals)} different options to trade." 
+            terminal_without_planets = []
+            gateways = []
+            for terminal in terminals:
+                if not terminal["id_planet"]:
+                    if terminal["name"].find("Gateway") != -1:
+                        gateways.append(terminal)
+                    else:
+                        terminal_without_planets.append(terminal)
+            if terminal_without_planets:
+                output["space_stations"] = ", ".join(
+                    [
+                        self._format_terminal_name(terminal)
+                        for terminal in terminal_without_planets
+                    ]
+                )
+            if gateways:
+                output["gateways"] = ", ".join(
+                    [self._format_terminal_name(terminal) for terminal in gateways]
+                )
+
+        planets = self._get_planets_by_systemcode(system["code"])
+        if planets:
+            output["planets"] = ", ".join(
+                [self._format_planet_name(planet) for planet in planets]
+            )
+
+        if self.uexcorp_add_lore:
+            output["background_information"] = await self._fetch_lore(
+                self._format_system_name(system)
+            )
+
+        self.cache["readable_objects"][checksum] = output
+        return output
+
+    async def _get_converted_ship_for_output(
+        self, ship: dict[str, any]
+    ) -> dict[str, any]:
+        """
+        Converts a ship dictionary to a dictionary that can be used as output.
+
+        Args:
+            ship (dict[str, any]): The ship dictionary to be converted.
+
+        Returns:
+            dict[str, any]: The converted ship dictionary.
+        """
+        checksum = f"ship--{ship['id']}"
+        if checksum in self.cache["readable_objects"]:
+            return self.cache["readable_objects"][checksum]
+
+        output = {
+            "type": "Ship" if ship["is_spaceship"] else "Ground Vehicle",
+            "name": self._format_ship_name(ship),
+            "manufacturer": ship["company_name"],
+            "cargo_capacity": f"{ship['scu']} SCU",
+            # "added_on_version": "Unknown" if ship["is_concept"] else ship["game_version"],
+            "field_of_activity": self._get_ship_field_of_activity(ship),
+        }
+
+        if not ship["is_concept"]:
+            if ship["purchase"]:
+                output["purchase_at"] = []
+                for option in ship["purchase"]:
+                    terminal = self.terminal_code_dict[option["id_terminal"]]
+                    output["purchase_at"].append(
+                        f"Buy at {self._format_terminal_name(terminal)} for {option['price_buy']} aUEC"
+                    )
+            else:
+                output["purchase_at"] = "Not available for purchase."
+            if ship["rental"]:
+                output["rent_at"] = []
+                for option in ship["rental"]:
+                    terminal = self.terminal_code_dict[option["id_terminal"]]
+                    output["rent_at"].append(
+                        f"Rent at {self._format_terminal_name(terminal)} for {option['price_rent']} aUEC per day."
+                    )
+            else:
+                output["rent_at"] = "Not available as rental."
+
+        if ship["hull_trading"] is True:
+            output["trading_info"] = (
+                "This ship can only trade on suitable space stations with a cargo loading option."
+            )
+
+        if self.uexcorp_add_lore:
+            output["additional_info"] = await self._fetch_lore(
+                self._format_ship_name(ship, False)
+            )
+
+        self.cache["readable_objects"][checksum] = output
+        return output
+
+    def _get_ship_field_of_activity(self, ship: dict[str, any]) -> str:
+        """
+        Returns the field of activity of a ship.
+
+        Args:
+            ship (dict[str, any]): The ship dictionary to get the field of activity for.
+
+        Returns:
+            str: The field of activity of the ship.
+ """ + + field = [] + if ship["is_exploration"]: + field.append("Exploration") + if ship["is_mining"]: + field.append("Mining") + if ship["is_salvage"]: + field.append("Salvage") + if ship["is_refinery"]: + field.append("Refinery") + if ship["is_scanning"]: + field.append("Scanning") + if ship["is_cargo"]: + field.append("Cargo") + if ship["is_medical"]: + field.append("Medical") + if ship["is_racing"]: + field.append("Racing") + if ship["is_repair"]: + field.append("Repair") + if ship["is_refuel"]: + field.append("Refuel") + if ship["is_interdiction"]: + field.append("Interdiction") + if ship["is_tractor_beam"]: + field.append("Tractor Beam") + if ship["is_qed"]: + field.append("Quantum Interdiction") + if ship["is_emp"]: + field.append("EMP") + if ship["is_construction"]: + field.append("Construction") + if ship["is_datarunner"]: + field.append("Datarunner") + if ship["is_science"]: + field.append("Science") + if ship["is_boarding"]: + field.append("Boarding") + if ship["is_stealth"]: + field.append("Stealth") + if ship["is_research"]: + field.append("Research") + if ship["is_carrier"]: + field.append("Carrier") + + addition = [] + if ship["is_civilian"]: + addition.append("Civilian") + if ship["is_military"]: + addition.append("Military") + + return f"{', '.join(field)} ({' & '.join(addition)})" + + async def _get_converted_commodity_for_output( + self, commodity: dict[str, any] + ) -> dict[str, any]: + """ + Converts a commodity dictionary to a dictionary that can be used as output. + + Args: + commodity (dict[str, any]): The commodity dictionary to be converted. + + Returns: + dict[str, any]: The converted commodity dictionary. + """ + checksum = f"commodity--{commodity['id']}" + if checksum in self.cache["readable_objects"]: + return self.cache["readable_objects"][checksum] + + output = { + "type": "Commodity", + "subtype": commodity["kind"], + "name": commodity["name"], + } + + price_buy_best = None + price_sell_best = None + output["buy_at"] = {} + output["sell_at"] = {} + + for terminal in self.terminals: + if "prices" not in terminal: + continue + if commodity["id"] in terminal["prices"]: + if terminal["prices"][commodity["id"]]["operation"] == "buy": + price_buy = terminal["prices"][commodity["id"]]["price_buy"] + if price_buy_best is None or price_buy < price_buy_best: + price_buy_best = price_buy + output["buy_at"][ + self._format_terminal_name(terminal) + ] = f"{price_buy} aUEC" + else: + price_sell = terminal["prices"][commodity["id"]]["price_sell"] + if price_sell_best is None or price_sell > price_sell_best: + price_sell_best = price_sell + output["sell_at"][ + self._format_terminal_name(terminal) + ] = f"{price_sell} aUEC" + + output["best_buy_price"] = ( + f"{price_buy_best} aUEC" if price_buy_best else "Not buyable." + ) + output["best_sell_price"] = ( + f"{price_sell_best} aUEC" if price_sell_best else "Not sellable." + ) + + boolean_keys = ["is_harvestable", "is_mineral", "is_illegal"] + for key in boolean_keys: + output[key] = "Yes" if commodity[key] else "No" + + if commodity["is_illegal"]: + output["notes"] = ( + "Stay away from ship scanns to avoid fines and crimestat, as this commodity is illegal." 
+ ) + + if self.uexcorp_add_lore: + output["additional_info"] = await self._fetch_lore(commodity["name"]) + + self.cache["readable_objects"][checksum] = output + return output + + async def _gpt_call_get_locations_to_sell_to( + self, + commodity_name: str = None, + ship_name: str = None, + position_name: str = None, + commodity_amount: int = 1, + maximal_number_of_locations: int = 5, + ) -> str: + await self._print( + f"Given Parameters: Commodity: {commodity_name}, Ship Name: {ship_name}, Current Position: {position_name}, Amount: {commodity_amount}, Maximal Number of Locations: {maximal_number_of_locations}", + True, + ) + + commodity_name = self._get_function_arg_from_cache( + "commodity_name", commodity_name + ) + ship_name = self._get_function_arg_from_cache("ship_name", ship_name) + + if commodity_name is None: + self._log("No commodity given. Ask for a commodity.", True) + return "No commodity given. Ask for a commodity." + + misunderstood = [] + parameters = { + "commodity_name": (commodity_name, self.commodity_names), + "ship_name": (ship_name, self.ship_names), + "position_name": (position_name, self.location_names_set_trading), + } + for param, (value, names_set) in parameters.items(): + if value is not None: + match = await self._find_closest_match(value, names_set) + if match is None: + misunderstood.append(f"{param}: {value}") + else: + self._set_function_arg_to_cache(param, match) + parameters[param] = (match, names_set) + commodity_name = parameters["commodity_name"][0] + ship_name = parameters["ship_name"][0] + position_name = parameters["position_name"][0] + + await self._print( + f"Interpreted Parameters: Commodity: {commodity_name}, Ship Name: {ship_name}, Position: {position_name}, Amount: {commodity_amount}, Maximal Number of Locations: {maximal_number_of_locations}", + True, + ) + + if misunderstood: + self._log( + "These given parameters do not exist in game. Exactly ask for clarification of these values: " + + ", ".join(misunderstood), + True, + ) + return ( + "These given parameters do not exist in game. 
+
+        terminals = (
+            self.terminals
+            if position_name is None
+            else self._get_terminals_by_position_name(position_name)
+        )
+        commodity = self._get_commodity_by_name(commodity_name)
+        ship = self._get_ship_by_name(ship_name)
+        amount = max(1, int(commodity_amount or 1))
+        maximal_number_of_locations = max(1, int(maximal_number_of_locations or 3))
+
+        selloptions = collections.defaultdict(list)
+        for terminal in terminals:
+            sellprice = self._get_data_location_sellprice(
+                terminal, commodity, ship, amount
+            )
+            if sellprice is not None:
+                selloptions[sellprice].append(terminal)
+
+        selloptions = dict(sorted(selloptions.items(), reverse=True))
+        selloptions = dict(
+            itertools.islice(selloptions.items(), maximal_number_of_locations)
+        )
+
+        messages = [
+            f"Here are the best {len(selloptions)} locations to sell {amount} SCU {commodity_name}:"
+        ]
+
+        for sellprice, terminals in selloptions.items():
+            messages.append(f"{sellprice} aUEC:")
+            for terminal in terminals:
+                messages.append(await self._get_terminal_route_description(terminal))
+            messages.append("\n")
+
+        self._log("\n".join(messages), True)
+        return "\n".join(messages)
+
+    async def _gpt_call_get_locations_to_buy_from(
+        self,
+        commodity_name: str = None,
+        ship_name: str = None,
+        position_name: str = None,
+        commodity_amount: int = 1,
+        maximal_number_of_locations: int = 5,
+    ) -> str:
+        await self._print(
+            f"Given Parameters: Commodity: {commodity_name}, Ship Name: {ship_name}, Current Position: {position_name}, Amount: {commodity_amount}, Maximal Number of Locations: {maximal_number_of_locations}",
+            True,
+        )
+
+        commodity_name = self._get_function_arg_from_cache(
+            "commodity_name", commodity_name
+        )
+        ship_name = self._get_function_arg_from_cache("ship_name", ship_name)
+
+        if commodity_name is None:
+            self._log("No commodity given. Ask for a commodity.", True)
+            return "No commodity given. Ask for a commodity."
+
+        misunderstood = []
+        parameters = {
+            "ship_name": (ship_name, self.ship_names),
+            "location_name": (position_name, self.location_names_set_trading),
+            "commodity_name": (commodity_name, self.commodity_names),
+        }
+        for param, (value, names_set) in parameters.items():
+            if value is not None:
+                match = await self._find_closest_match(value, names_set)
+                if match is None:
+                    misunderstood.append(f"{param}: {value}")
+                else:
+                    self._set_function_arg_to_cache(param, match)
+                    parameters[param] = (match, names_set)
+        ship_name = parameters["ship_name"][0]
+        position_name = parameters["location_name"][0]
+        commodity_name = parameters["commodity_name"][0]
+
+        await self._print(
+            f"Interpreted Parameters: Commodity: {commodity_name}, Ship Name: {ship_name}, Position: {position_name}, Amount: {commodity_amount}, Maximal Number of Locations: {maximal_number_of_locations}",
+            True,
+        )
+
+        if misunderstood:
+            self._log(
+                "These given parameters do not exist in game. Exactly ask for clarification of these values: "
+                + ", ".join(misunderstood),
+                True,
+            )
+            return (
+                "These given parameters do not exist in game. Exactly ask for clarification of these values: "
+                + ", ".join(misunderstood)
+            )
+
+        terminals = (
+            self.terminals
+            if position_name is None
+            else self._get_terminals_by_position_name(position_name)
+        )
+        commodity = self._get_commodity_by_name(commodity_name)
+        ship = self._get_ship_by_name(ship_name)
+        amount = max(1, int(commodity_amount or 1))
+        maximal_number_of_locations = max(1, int(maximal_number_of_locations or 3))
+
+        buyoptions = collections.defaultdict(list)
+        for terminal in terminals:
+            buyprice = self._get_data_location_buyprice(
+                terminal, commodity, ship, amount
+            )
+            if buyprice is not None:
+                buyoptions[buyprice].append(terminal)
+
+        buyoptions = dict(sorted(buyoptions.items(), reverse=False))
+        buyoptions = dict(
+            itertools.islice(buyoptions.items(), maximal_number_of_locations)
+        )
+
+        messages = [
+            f"Here are the best {len(buyoptions)} locations to buy {amount} SCU {commodity_name}:"
+        ]
+        for buyprice, terminals in buyoptions.items():
+            messages.append(f"{buyprice} aUEC:")
+            for terminal in terminals:
+                messages.append(await self._get_terminal_route_description(terminal))
+            messages.append("\n")
+
+        self._log("\n".join(messages), True)
+        return "\n".join(messages)
+
+    def _get_data_location_sellprice(self, terminal, commodity, ship=None, amount=1):
+        if (
+            ship is not None
+            and ship["hull_trading"] is True
+            and terminal.get("hull_trading") is not True
+        ):
+            return None
+
+        if "prices" not in terminal:
+            return None
+
+        commodity_code = commodity["id"]
+        for code, price in terminal["prices"].items():
+            if code == commodity_code and price["operation"] == "sell":
+                return price["price_sell"] * amount
+        return None
+
+    def _get_data_location_buyprice(self, terminal, commodity, ship=None, amount=1):
+        if (
+            ship is not None
+            and ship["hull_trading"] is True
+            and terminal.get("hull_trading") is not True
+        ):
+            return None
+
+        if "prices" not in terminal:
+            return None
+
+        commodity_code = commodity["id"]
+        for code, price in terminal["prices"].items():
+            if code == commodity_code and price["operation"] == "buy":
+                return price["price_buy"] * amount
+        return None
+
+    async def _gpt_call_get_trading_routes(
+        self,
+        ship_name: str = None,
+        money_to_spend: float = None,
+        position_start_name: str = None,
+        free_cargo_space: float = None,
+        position_end_name: str = None,
+        commodity_name: str = None,
+        illegal_commodities_allowed: bool = None,
+        maximal_number_of_routes: int = None,
+    ) -> str:
+        """
+        Finds multiple best trading routes based on the given parameters.
+
+        Args:
+            ship_name (str, optional): The name of the ship. Defaults to None.
+            money_to_spend (float, optional): The amount of money to spend. Defaults to None.
+            position_start_name (str, optional): The name of the starting position. Defaults to None.
+            free_cargo_space (float, optional): The amount of free cargo space. Defaults to None.
+            position_end_name (str, optional): The name of the ending position. Defaults to None.
+            commodity_name (str, optional): The name of the commodity. Defaults to None.
+            illegal_commodities_allowed (bool, optional): Flag indicating whether illegal commodities are allowed. Defaults to True.
+            maximal_number_of_routes (int, optional): The maximum number of routes to return. Defaults to the configured default route count.
+
+        Returns:
+            str: A string representation of the trading routes found.
+        """
+
+        # For later use in distance calculation:
+        # https://starmap.tk/api/v2/oc/
+        # https://starmap.tk/api/v2/pois/
+
+        await self._print(
+            f"Parameters: Ship: {ship_name}, Position Start: {position_start_name}, Position End: {position_end_name}, Commodity Name: {commodity_name}, Money: {money_to_spend} aUEC, free_cargo_space: {free_cargo_space} SCU, Maximal Number of Routes: {maximal_number_of_routes}, Illegal Allowed: {illegal_commodities_allowed}",
+            True,
+        )
+
+        ship_name = self._get_function_arg_from_cache("ship_name", ship_name)
+        illegal_commodities_allowed = self._get_function_arg_from_cache(
+            "illegal_commodities_allowed", illegal_commodities_allowed
+        )
+        if illegal_commodities_allowed is None:
+            illegal_commodities_allowed = True
+
+        missing_args = []
+        if ship_name is None:
+            missing_args.append("ship_name")
+
+        if self.uexcorp_tradestart_mandatory and position_start_name is None:
+            missing_args.append("position_start_name")
+
+        money_to_spend = (
+            None
+            if money_to_spend is not None and int(money_to_spend) < 1
+            else money_to_spend
+        )
+        free_cargo_space = (
+            None
+            if free_cargo_space is not None and int(free_cargo_space) < 1
+            else free_cargo_space
+        )
+
+        misunderstood = []
+        parameters = {
+            "ship_name": (ship_name, self.ship_names),
+            "position_start_name": (
+                position_start_name,
+                self.location_names_set_trading,
+            ),
+            "position_end_name": (position_end_name, self.location_names_set_trading),
+            "commodity_name": (commodity_name, self.commodity_names),
+        }
+        for param, (value, names_set) in parameters.items():
+            if value is not None:
+                match = await self._find_closest_match(value, names_set)
+                if match is None:
+                    misunderstood.append(f"{param}: {value}")
+                else:
+                    self._set_function_arg_to_cache(param, match)
+                    parameters[param] = (match, names_set)
+        ship_name = parameters["ship_name"][0]
+        position_start_name = parameters["position_start_name"][0]
+        position_end_name = parameters["position_end_name"][0]
+        commodity_name = parameters["commodity_name"][0]
+
+        if money_to_spend is not None:
+            self._set_function_arg_to_cache("money", money_to_spend)
+
+        await self._print(
+            f"Interpreted Parameters: Ship: {ship_name}, Position Start: {position_start_name}, Position End: {position_end_name}, Commodity Name: {commodity_name}, Money: {money_to_spend} aUEC, free_cargo_space: {free_cargo_space} SCU, Maximal Number of Routes: {maximal_number_of_routes}, Illegal Allowed: {illegal_commodities_allowed}",
+            True,
+        )
+
+        if misunderstood or missing_args:
+            misunderstood_str = ", ".join(misunderstood)
+            missing_str = ", ".join(missing_args)
+            answer = ""
+            if missing_str:
+                answer += f"Missing parameters: {missing_str}. "
" + if misunderstood_str: + answer += ( + f"These given parameters were misunderstood: {misunderstood_str}" + ) + return answer + + # set variables + ship = self._get_ship_by_name(ship_name) + if money_to_spend is not None: + money = int(money_to_spend) + else: + money = None + if free_cargo_space is not None: + free_cargo_space = int(free_cargo_space) + else: + free_cargo_space = None + commodity = ( + self._get_commodity_by_name(commodity_name) if commodity_name else None + ) + maximal_number_of_routes = int( + maximal_number_of_routes or self.uexcorp_default_trade_route_count + ) + start_terminals = ( + self._get_terminals_by_position_name(position_start_name) + if position_start_name + else self.terminals + ) + end_terminals = ( + self._get_terminals_by_position_name(position_end_name) + if position_end_name + else self.terminals + ) + + commodities = [] + if commodity is None: + commodities = self.commodities + else: + commodities.append(commodity) + + trading_routes = [] + errors = [] + for commodity in commodities: + commodity_routes = [] + if not illegal_commodities_allowed and commodity["is_illegal"]: + continue + for start_terminal in start_terminals: + if ( + "prices" not in start_terminal + or commodity["id"] not in start_terminal["prices"] + or start_terminal["prices"][commodity["id"]]["operation"] != "buy" + ): + continue + for end_terminal in end_terminals: + if ( + "prices" not in end_terminal + or commodity["id"] not in end_terminal["prices"] + or end_terminal["prices"][commodity["id"]]["operation"] + != "sell" + ): + continue + + if ( + ship + and ship["hull_trading"] is True + and ( + "hull_trading" not in start_terminal + or start_terminal["hull_trading"] is not True + or "hull_trading" not in end_terminal + or end_terminal["hull_trading"] is not True + ) + ): + continue + + trading_route_new = self._get_trading_route( + ship, + start_terminal, + end_terminal, + money, + free_cargo_space, + commodity, + illegal_commodities_allowed, + ) + + if isinstance(trading_route_new, str): + if trading_route_new not in errors: + errors.append(trading_route_new) + else: + commodity_routes.append(trading_route_new) + + if len(commodity_routes) > 0: + if self.uexcorp_summarize_routes_by_commodity: + best_commodity_routes = heapq.nlargest( + 1, commodity_routes, key=lambda k: int(k["profit"]) + ) + trading_routes.extend(best_commodity_routes) + else: + trading_routes.extend(commodity_routes) + + if len(trading_routes) > 0: + additional_answer = "" + if len(trading_routes) < maximal_number_of_routes: + additional_answer += ( + f" There are only {len(trading_routes)} routes available." + ) + else: + additional_answer += f" There are {len(trading_routes)} routes available and these are the best {maximal_number_of_routes} ones." 
+            # sort trading routes by profit and limit to maximal_number_of_routes
+            trading_routes = heapq.nlargest(
+                maximal_number_of_routes, trading_routes, key=lambda k: int(k["profit"])
+            )
+
+            for trading_route in trading_routes:
+                destinationselection = []
+                for terminal in trading_route["end"]:
+                    destinationselection.append(
+                        f"{(await self._get_terminal_route_description(terminal))}"
+                    )
+                trading_route["end"] = " OR ".join(destinationselection)
+                startselection = []
+                for terminal in trading_route["start"]:
+                    startselection.append(
+                        f"{(await self._get_terminal_route_description(terminal))}"
+                    )
+                trading_route["start"] = " OR ".join(startselection)
+
+            # format the trading routes
+            for trading_route in trading_routes:
+                trading_route["commodity"] = self._format_commodity_name(
+                    trading_route["commodity"]
+                )
+                trading_route["profit"] = f"{trading_route['profit']} aUEC"
+                trading_route["buy"] = f"{trading_route['buy']} aUEC"
+                trading_route["sell"] = f"{trading_route['sell']} aUEC"
+                trading_route["cargo"] = f"{trading_route['cargo']} SCU"
+
+            message = (
+                "Possible commodities with their profit. Just give a basic overview at first.\n"
+                + additional_answer
+                + " JSON: \n "
+                + json.dumps(trading_routes)
+            )
+            return message
+        else:
+            return_string = "No trading routes found."
+            if len(errors) > 0:
+                return_string += "\nPossible errors are:\n- " + "\n- ".join(errors)
+            return return_string
+
+    def _get_trading_route(
+        self,
+        ship: dict[str, any],
+        position_start: dict[str, any],
+        position_end: dict[str, any],
+        money: int = None,
+        free_cargo_space: int = None,
+        commodity: dict[str, any] = None,
+        illegal_commodities_allowed: bool = True,
+    ) -> dict[str, any] | str:
+        """
+        Finds the best trading route based on the given parameters.
+
+        Args:
+            ship (dict[str, any]): The ship dictionary.
+            position_start (dict[str, any]): The starting position dictionary.
+            position_end (dict[str, any]): The ending position dictionary.
+            money (int, optional): The amount of money to spend. Defaults to None.
+            free_cargo_space (int, optional): The amount of free cargo space. Defaults to None.
+            commodity (dict[str, any], optional): The commodity dictionary. Defaults to None.
+            illegal_commodities_allowed (bool, optional): Flag indicating whether illegal commodities are allowed. Defaults to True.
+
+        Returns:
+            dict[str, any] | str: The best trading route as a dictionary, or an error message string if no route is found.
+        """
+
+        # set variables
+        cargo_space = ship["scu"]
+        if free_cargo_space:
+            cargo_space = free_cargo_space
+            if free_cargo_space > ship["scu"]:
+                cargo_space = ship["scu"]
+
+        if cargo_space < 1:
+            return "Your ship has no cargo space to trade."
+
+        commodity_filter = commodity
+        start_terminals = [position_start]
+        if ship["hull_trading"] is True:
+            start_terminals = [
+                terminal
+                for terminal in start_terminals
+                if "hull_trading" in terminal and terminal["hull_trading"] is True
+            ]
+        if len(start_terminals) < 1:
+            if ship["hull_trading"] is True:
+                return "No valid start position given. Make sure to provide a start point compatible with your ship."
+            return "No valid start position given. Try a different position or just name a planet or star system."
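+
+        # Mirror the start-side check for the destination: a ship flagged for
+        # hull trading needs a terminal with a loading area on both ends of
+        # the route (see the "hull_trading" checks above and below).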
+        end_terminals = [position_end]
+        if ship["hull_trading"] is True:
+            end_terminals = [
+                terminal
+                for terminal in end_terminals
+                if "hull_trading" in terminal and terminal["hull_trading"] is True
+            ]
+        if len(end_terminals) < 1:
+            return "No valid end position given."
+
+        if (
+            len(end_terminals) == 1
+            and len(start_terminals) == 1
+            and end_terminals[0]["id"] == start_terminals[0]["id"]
+        ):
+            return "Start and end position are the same."
+
+        if money is not None and money <= 0:
+            return "You don't have enough money to trade."
+
+        best_route = {
+            "start": [],
+            "end": [],
+            "commodity": {},
+            "profit": 0,
+            "cargo": 0,
+            "buy": 0,
+            "sell": 0,
+            "additional_info": "",
+        }
+
+        # apply trade port blacklist
+        if self.uexcorp_trade_blacklist:
+            for blacklist_item in self.uexcorp_trade_blacklist:
+                if "tradeport" in blacklist_item and blacklist_item["tradeport"]:
+                    for terminal in start_terminals:
+                        if (
+                            self._format_terminal_name(terminal)
+                            == blacklist_item["tradeport"]
+                        ):
+                            if (
+                                "commodity" not in blacklist_item
+                                or not blacklist_item["commodity"]
+                            ):
+                                # remove terminal, if no commodity given
+                                start_terminals.remove(terminal)
+                                break
+                            else:
+                                commodity = self._get_commodity_by_name(
+                                    blacklist_item["commodity"]
+                                )
+                                for commodity_code, data in terminal["prices"].items():
+                                    if commodity["id"] == commodity_code:
+                                        # remove commodity code from terminal
+                                        terminal["prices"].pop(commodity_code)
+                                        break
+                    for terminal in end_terminals:
+                        if (
+                            self._format_terminal_name(terminal)
+                            == blacklist_item["tradeport"]
+                        ):
+                            if (
+                                "commodity" not in blacklist_item
+                                or not blacklist_item["commodity"]
+                            ):
+                                # remove terminal, if no commodity given
+                                end_terminals.remove(terminal)
+                                break
+                            else:
+                                commodity = self._get_commodity_by_name(
+                                    blacklist_item["commodity"]
+                                )
+                                for commodity_code, data in terminal["prices"].items():
+                                    if commodity["id"] == commodity_code:
+                                        # remove commodity code from terminal
+                                        terminal["prices"].pop(commodity_code)
+                                        break
+
+        if len(start_terminals) < 1 or len(end_terminals) < 1:
+            return "Excluded by blacklist."
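+
+        # Example blacklist entries (shape inferred from the checks above; the
+        # names are illustrative only):
+        #   {"tradeport": "TDD Orison"}                      -> skip this terminal entirely
+        #   {"tradeport": "TDD Orison", "commodity": "Gold"} -> skip one commodity there
+        #   {"commodity": "Widow"}                           -> skip the commodity everywhere
+        # The nested loops below try every buy/sell terminal pair per commodity
+        # and keep the single most profitable combination.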
+        for terminal_start in start_terminals:
+            commodities = []
+            if "prices" not in terminal_start:
+                continue
+
+            for commodity_code, price in terminal_start["prices"].items():
+                if price["operation"] == "buy" and (
+                    commodity_filter is None or commodity_filter["id"] == commodity_code
+                ):
+                    commodity = self._get_commodity_by_code(commodity_code)
+                    if (
+                        illegal_commodities_allowed is True
+                        or not commodity["is_illegal"]
+                    ):
+                        temp_price = price
+                        temp_price["commodity_code"] = commodity_code
+
+                        in_blacklist = False
+                        # apply commodity blacklist
+                        if self.uexcorp_trade_blacklist:
+                            for blacklist_item in self.uexcorp_trade_blacklist:
+                                if (
+                                    "commodity" in blacklist_item
+                                    and blacklist_item["commodity"]
+                                    and (
+                                        "tradeport" not in blacklist_item
+                                        or not blacklist_item["tradeport"]
+                                    )
+                                ):
+                                    if commodity["name"] == blacklist_item["commodity"]:
+                                        # commodity is blacklisted everywhere
+                                        in_blacklist = True
+                                        break
+
+                        if not in_blacklist:
+                            commodities.append(price)
+
+            if len(commodities) < 1:
+                continue
+
+            for terminal_end in end_terminals:
+                if "prices" not in terminal_end:
+                    continue
+
+                for commodity_code, price in terminal_end["prices"].items():
+                    sell_commodity = self._get_commodity_by_code(commodity_code)
+
+                    in_blacklist = False
+                    # apply commodity blacklist
+                    if sell_commodity and self.uexcorp_trade_blacklist:
+                        for blacklist_item in self.uexcorp_trade_blacklist:
+                            if (
+                                "commodity" in blacklist_item
+                                and blacklist_item["commodity"]
+                                and (
+                                    "tradeport" not in blacklist_item
+                                    or not blacklist_item["tradeport"]
+                                )
+                            ):
+                                if (
+                                    sell_commodity["name"]
+                                    == blacklist_item["commodity"]
+                                ):
+                                    # commodity is blacklisted everywhere
+                                    in_blacklist = True
+                                    break
+
+                    if in_blacklist:
+                        continue
+
+                    temp_price = price
+                    temp_price["commodity_code"] = commodity_code
+
+                    for commodity in commodities:
+                        if (
+                            commodity["commodity_code"] == temp_price["commodity_code"]
+                            and price["operation"] == "sell"
+                            and price["price_sell"] > commodity["price_buy"]
+                        ):
+                            if money is None:
+                                cargo_by_money = cargo_space
+                            else:
+                                cargo_by_money = math.floor(
+                                    money / commodity["price_buy"]
+                                )
+                            cargo_by_space = cargo_space
+                            if self.uexcorp_use_estimated_availability:
+                                cargo_by_availability_sell = (
+                                    temp_price["scu_expected"] or 0
+                                )
+                                cargo_by_availability_buy = (
+                                    commodity["scu_expected"] or 0
+                                )
+                            else:
+                                cargo_by_availability_sell = cargo_by_space
+                                cargo_by_availability_buy = cargo_by_space
+
+                            cargo = min(
+                                cargo_by_money,
+                                cargo_by_space,
+                                cargo_by_availability_sell,
+                                cargo_by_availability_buy,
+                            )
+                            if cargo >= 1:
+                                info = ""
+                                if min(cargo_by_money, cargo_by_space) > min(
+                                    cargo_by_availability_buy,
+                                    cargo_by_availability_sell,
+                                ):
+                                    if (
+                                        cargo_by_availability_buy
+                                        < cargo_by_availability_sell
+                                    ):
+                                        info = f"Please mention to user: SCU count limited to {min(cargo_by_availability_buy, cargo_by_availability_sell)} (instead of {min(cargo_by_money, cargo_by_space)}) by estimated availability at buy location."
+                                    elif (
+                                        cargo_by_availability_buy
+                                        > cargo_by_availability_sell
+                                    ):
+                                        info = f"Please mention to user: SCU count limited to {min(cargo_by_availability_buy, cargo_by_availability_sell)} (instead of {min(cargo_by_money, cargo_by_space)}) by estimated availability at sell location."
+                                    else:
+                                        info = f"Please mention to user: SCU count limited to {min(cargo_by_availability_buy, cargo_by_availability_sell)} (instead of {min(cargo_by_money, cargo_by_space)}) by estimated availability at sell and buy location."
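+                                # Profit for this candidate, e.g. 50 SCU bought
+                                # at 20 aUEC and sold at 28 aUEC per SCU yields
+                                # round(50 * (28 - 20)) = 400 aUEC.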
+                                profit = round(
+                                    cargo
+                                    * (price["price_sell"] - commodity["price_buy"])
+                                )
+                                if profit > best_route["profit"]:
+                                    best_route["start"] = [terminal_start]
+                                    best_route["end"] = [terminal_end]
+                                    best_route["commodity"] = temp_price
+                                    best_route["profit"] = profit
+                                    best_route["cargo"] = cargo
+                                    best_route["buy"] = commodity["price_buy"] * cargo
+                                    best_route["sell"] = price["price_sell"] * cargo
+                                    best_route["additional_info"] = info
+                                else:
+                                    if (
+                                        profit == best_route["profit"]
+                                        and best_route["commodity"]["commodity_code"]
+                                        == temp_price["commodity_code"]
+                                    ):
+                                        if terminal_start not in best_route["start"]:
+                                            best_route["start"].append(terminal_start)
+                                        if terminal_end not in best_route["end"]:
+                                            best_route["end"].append(terminal_end)
+
+        if len(best_route["start"]) == 0:
+            return f"No route found for your {ship['name']}. Try a different route."
+
+        best_route["profit"] = f"{best_route['profit']}"
+        best_route["cargo"] = f"{best_route['cargo']}"
+        best_route["buy"] = f"{best_route['buy']}"
+        best_route["sell"] = f"{best_route['sell']}"
+
+        return best_route
+
+    def _get_ship_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the ship with the specified name and returns the ship or None.
+
+        Args:
+            name (str): The name of the ship to search for.
+
+        Returns:
+            Optional[object]: The ship object if found, or None if not found.
+        """
+        return self.ship_dict.get(name.lower()) if name else None
+
+    def _get_terminal_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the terminal with the specified name and returns the terminal or None.
+
+        Args:
+            name (str): The name of the terminal to search for.
+
+        Returns:
+            Optional[object]: The terminal object if found, otherwise None.
+        """
+        return self.terminal_dict.get(name.lower()) if name else None
+
+    def _get_terminal_by_code(self, code: str) -> dict[str, any] | None:
+        """Finds the terminal with the specified code and returns the terminal or None.
+
+        Args:
+            code (str): The code of the terminal to search for.
+
+        Returns:
+            Optional[object]: The terminal object if found, otherwise None.
+        """
+        return self.terminal_code_dict.get(code) if code else None
+
+    def _get_planet_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the planet with the specified name and returns the planet or None.
+
+        Args:
+            name (str): The name of the planet to search for.
+
+        Returns:
+            Optional[object]: The planet object if found, otherwise None.
+        """
+        return self.planet_dict.get(name.lower()) if name else None
+
+    def _get_city_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the city with the specified name and returns the city or None.
+
+        Args:
+            name (str): The name of the city to search for.
+
+        Returns:
+            Optional[object]: The city object if found, or None if not found.
+        """
+        return self.city_dict.get(name.lower()) if name else None
+
+    def _get_moon_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the moon with the specified name and returns the moon or None.
+
+        Args:
+            name (str): The name of the moon to search for.
+
+        Returns:
+            Optional[object]: The moon object if found, otherwise None.
+        """
+        return self.moon_dict.get(name.lower()) if name else None
+
+    def _get_system_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the system with the specified name and returns the system or None.
+
+        Args:
+            name (str): The name of the system to search for.
+
+        Returns:
+            Optional[object]: The system object if found, otherwise None.
+        """
+        return self.system_dict.get(name.lower()) if name else None
+
+    def _get_commodity_by_name(self, name: str) -> dict[str, any] | None:
+        """Finds the commodity with the specified name and returns the commodity or None.
+
+        Args:
+            name (str): The name of the commodity to search for.
+
+        Returns:
+            Optional[object]: The commodity object if found, otherwise None.
+        """
+        return self.commodity_dict.get(name.lower()) if name else None
+
+    async def _get_terminal_route_description(self, terminal: dict[str, any]) -> str:
+        """Returns the breadcrumbs of a terminal.
+
+        Args:
+            terminal (dict[str, any]): The terminal information.
+
+        Returns:
+            str: The description of the terminal route.
+        """
+        terminal = await self._get_converted_terminal_for_output(terminal, False)
+        keys = [
+            ("star_system", "Star System"),
+            ("planet", "Planet"),
+            ("moon", "Moon"),
+            ("city", "City"),
+            ("name", "Trade Point"),
+        ]
+        route = [f"{name}: {terminal[key]}" for key, name in keys if key in terminal]
+        return f"({' >> '.join(route)})"
+
+    def _get_system_name_by_code(self, code: str) -> str:
+        """Returns the name of the system with the specified code.
+
+        Args:
+            code (str): The code of the system.
+
+        Returns:
+            str: The name of the system with the specified code.
+        """
+        return (
+            self._format_system_name(self.system_code_dict.get(code)) if code else None
+        )
+
+    def _get_planet_name_by_code(self, code: str) -> str:
+        """Returns the name of the planet with the specified code.
+
+        Args:
+            code (str): The code of the planet.
+
+        Returns:
+            str: The name of the planet with the specified code.
+        """
+        return (
+            self._format_planet_name(self.planet_code_dict.get(code)) if code else None
+        )
+
+    def _get_moon_name_by_code(self, code: str) -> str:
+        """Returns the name of the moon with the specified code.
+
+        Args:
+            code (str): The code of the moon.
+
+        Returns:
+            str: The name of the moon with the specified code.
+        """
+        return self._format_moon_name(self.moon_code_dict.get(code)) if code else None
+
+    def _get_city_name_by_code(self, code: str) -> str:
+        """Returns the name of the city with the specified code.
+
+        Args:
+            code (str): The code of the city.
+
+        Returns:
+            str: The name of the city with the specified code.
+        """
+        return self._format_city_name(self.city_code_dict.get(code)) if code else None
+
+    def _get_commodity_name_by_code(self, code: str) -> str:
+        """Returns the name of the commodity with the specified code.
+
+        Args:
+            code (str): The code of the commodity.
+
+        Returns:
+            str: The name of the commodity with the specified code.
+        """
+        return (
+            self._format_commodity_name(self.commodity_code_dict.get(code))
+            if code
+            else None
+        )
+
+    def _get_commodity_by_code(self, code: str) -> dict[str, any] | None:
+        """Finds the commodity with the specified code and returns the commodity or None.
+
+        Args:
+            code (str): The code of the commodity to search for.
+
+        Returns:
+            Optional[object]: The commodity object if found, otherwise None.
+        """
+        return self.commodity_code_dict.get(code) if code else None
+
+    def _get_terminals_by_position_name(self, name: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified position name.
+
+        Args:
+            name (str): The position name to search for.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the position name.
+        """
+        if not name:
+            return []
+
+        terminals = []
+
+        terminal_temp = self._get_terminal_by_name(name)
+        if terminal_temp:
+            terminals.append(terminal_temp)
+
+        terminals.extend(self._get_terminals_by_systemname(name))
+        terminals.extend(self._get_terminals_by_planetname(name))
+        terminals.extend(self._get_terminals_by_moonname(name))
+        terminals.extend(self._get_terminals_by_cityname(name))
+        return terminals
+
+    def _get_moons_by_planetcode(self, code: str) -> list[dict[str, any]]:
+        """Returns all moons with the specified planet code.
+
+        Args:
+            code (str): The code of the planet.
+
+        Returns:
+            list[dict[str, any]]: A list of moons matching the planet code.
+        """
+        return self.moons_by_planet.get(code, []) if code else []
+
+    def _get_cities_by_planetcode(self, code: str) -> list[dict[str, any]]:
+        """Returns all cities with the specified planet code.
+
+        Args:
+            code (str): The code of the planet.
+
+        Returns:
+            list[dict[str, any]]: A list of cities matching the planet code.
+        """
+        return self.cities_by_planet.get(code, []) if code else []
+
+    def _get_planets_by_systemcode(self, code: str) -> list[dict[str, any]]:
+        """Returns all planets with the specified system code.
+
+        Args:
+            code (str): The code of the system.
+
+        Returns:
+            list[dict[str, any]]: A list of planets matching the system code.
+        """
+        return self.planets_by_system.get(code, []) if code else []
+
+    def _get_terminals_by_systemcode(self, code: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified system code.
+
+        Args:
+            code (str): The code of the system.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the system code.
+        """
+        return self.terminals_by_system.get(code, []) if code else []
+
+    def _get_terminals_by_planetcode(self, code: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified planet code.
+
+        Args:
+            code (str): The code of the planet.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the planet code.
+        """
+        return self.terminals_by_planet.get(code, []) if code else []
+
+    def _get_terminals_by_mooncode(self, code: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified moon code.
+
+        Args:
+            code (str): The code of the moon.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the moon code.
+        """
+        return self.terminals_by_moon.get(code, []) if code else []
+
+    def _get_terminals_by_citycode(self, code: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified city code.
+
+        Args:
+            code (str): The code of the city.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the city code.
+        """
+        return self.terminals_by_city.get(code, []) if code else []
+
+    def _get_terminals_by_planetname(self, name: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified planet name.
+
+        Args:
+            name (str): The name of the planet.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the planet name.
+        """
+        planet = self._get_planet_by_name(name)
+        return self._get_terminals_by_planetcode(planet["id"]) if planet else []
+
+    def _get_terminals_by_moonname(self, name: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified moon name.
+
+        Args:
+            name (str): The name of the moon.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the moon name.
+        """
+        moon = self._get_moon_by_name(name)
+        return self._get_terminals_by_mooncode(moon["id"]) if moon else []
+
+    def _get_terminals_by_cityname(self, name: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified city name.
+
+        Args:
+            name (str): The name of the city.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the city name.
+        """
+        city = self._get_city_by_name(name)
+        return self._get_terminals_by_citycode(city["id"]) if city else []
+
+    def _get_terminals_by_systemname(self, name: str) -> list[dict[str, any]]:
+        """Returns all terminals with the specified system name.
+
+        Args:
+            name (str): The name of the system.
+
+        Returns:
+            list[dict[str, any]]: A list of terminals matching the system name.
+        """
+        system = self._get_system_by_name(name)
+        return self._get_terminals_by_systemcode(system["id"]) if system else []
diff --git a/templates/migration/1_5_0/skills/vision_ai/default_config.yaml b/templates/migration/1_5_0/skills/vision_ai/default_config.yaml
new file mode 100644
index 00000000..564eb0a8
--- /dev/null
+++ b/templates/migration/1_5_0/skills/vision_ai/default_config.yaml
@@ -0,0 +1,28 @@
+name: VisionAI
+module: skills.vision_ai.main
+category: general
+description:
+  en: Let your Wingman analyse whatever is on your screen.
+  de: Lass deinen Wingman alles analysieren, was auf deinem Bildschirm zu sehen ist.
+examples:
+  - question:
+      en: What is on my screen?
+      de: Was siehst du auf meinem Bildschirm?
+    answer:
+      en: I see Spotify running and playing music.
+      de: Ich sehe die Spotify-App, die Musik abspielt.
+prompt: |
+  You can also see what the user is seeing and you can analyse it and answer all questions about what you see.
+  Use the tool 'analyse_what_you_or_user_sees' if you are asked to analyse what you see or what the user sees.
+  You can also see the screen of the user. Call 'analyse_what_you_or_user_sees' for this, too.
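+# The "display" value follows mss monitor numbering as used in main.py:
+# sct.monitors[0] is the combined virtual screen, so 1 is the primary monitor.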
+custom_properties: + - id: display + name: Display to capture + property_type: number + required: true + value: 1 + - id: show_screenshots + name: Show screenshots + property_type: boolean + required: true + value: true diff --git a/templates/migration/1_5_0/skills/vision_ai/logo.png b/templates/migration/1_5_0/skills/vision_ai/logo.png new file mode 100644 index 00000000..440cd3be Binary files /dev/null and b/templates/migration/1_5_0/skills/vision_ai/logo.png differ diff --git a/templates/migration/1_5_0/skills/vision_ai/main.py b/templates/migration/1_5_0/skills/vision_ai/main.py new file mode 100644 index 00000000..8f51673d --- /dev/null +++ b/templates/migration/1_5_0/skills/vision_ai/main.py @@ -0,0 +1,173 @@ +import base64 +import io +from typing import TYPE_CHECKING +from mss import mss +from PIL import Image +from api.enums import LogSource, LogType +from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class VisionAI(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + self.display = 1 + self.show_screenshots = False + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + self.display = self.retrieve_custom_property_value("display", errors) + self.show_screenshots = self.retrieve_custom_property_value( + "show_screenshots", errors + ) + + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "analyse_what_you_or_user_sees", + { + "type": "function", + "function": { + "name": "analyse_what_you_or_user_sees", + "description": "Analyse what you or the user sees and answer questions about it.", + "parameters": { + "type": "object", + "properties": { + "question": { + "type": "string", + "description": "The question to answer about the image.", + } + }, + "required": ["question"], + }, + }, + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "" + instant_response = "" + + if tool_name == "analyse_what_you_or_user_sees": + # Take a screenshot + with mss() as sct: + main_display = sct.monitors[self.display] + screenshot = sct.grab(main_display) + + # Create a PIL image from array + image = Image.frombytes( + "RGB", screenshot.size, screenshot.bgra, "raw", "BGRX" + ) + + desired_width = 1000 + aspect_ratio = image.height / image.width + new_height = int(desired_width * aspect_ratio) + + resized_image = image.resize((desired_width, new_height)) + + png_base64 = self.pil_image_to_base64(resized_image) + + if self.show_screenshots: + await self.printr.print_async( + "Analyzing this image", + color=LogType.INFO, + source=LogSource.WINGMAN, + source_name=self.wingman.name, + skill_name=self.name, + additional_data={"image": png_base64}, + ) + + question = parameters.get("question", "What's in this image?") + + messages = [ + { + "role": "system", + "content": """ + You are a helpful ai assistant. 
+                    """,
+                },
+                {
+                    "role": "user",
+                    "content": [
+                        {"type": "text", "text": question},
+                        {
+                            "type": "image_url",
+                            "image_url": {
+                                "url": f"data:image/png;base64,{png_base64}",
+                                "detail": "high",
+                            },
+                        },
+                    ],
+                },
+            ]
+            completion = await self.llm_call(messages)
+            answer = (
+                completion.choices[0].message.content
+                if completion and completion.choices
+                else ""
+            )
+
+            if answer:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(f"Vision analysis: {answer}", color=LogType.INFO)
+                function_response = answer
+
+        return function_response, instant_response
+
+    async def is_summarize_needed(self, tool_name: str) -> bool:
+        """Returns whether a tool needs to be summarized."""
+        return True
+
+    async def is_waiting_response_needed(self, tool_name: str) -> bool:
+        """Returns whether a tool probably takes long and a message should be printed in between."""
+        return True
+
+    def pil_image_to_base64(self, pil_image):
+        """
+        Convert a PIL image to a base64 encoded string.
+
+        :param pil_image: PIL Image object
+        :return: Base64 encoded string of the image
+        """
+        # Create a bytes buffer to hold the image data
+        buffer = io.BytesIO()
+        # Save the PIL image to the bytes buffer in PNG format
+        pil_image.save(buffer, format="PNG")
+        # Get the byte data from the buffer and encode it to Base64
+        base64_encoded_data = base64.b64encode(buffer.getvalue())
+        # Convert the base64 bytes to a string
+        base64_string = base64_encoded_data.decode("utf-8")
+
+        return base64_string
+
+    def convert_png_to_base64(self, png_data):
+        """
+        Convert raw PNG data to a base64 encoded string.
+
+        :param png_data: A bytes object containing the raw PNG data
+        :return: A base64 encoded string.
+        """
+        # Encode the PNG data to base64
+        base64_encoded_data = base64.b64encode(png_data)
+        # Convert the base64 bytes to a string
+        base64_string = base64_encoded_data.decode("utf-8")
+        return base64_string
diff --git a/templates/migration/1_5_0/skills/voice_changer/default_config.yaml b/templates/migration/1_5_0/skills/voice_changer/default_config.yaml
new file mode 100644
index 00000000..eb9ca7f9
--- /dev/null
+++ b/templates/migration/1_5_0/skills/voice_changer/default_config.yaml
@@ -0,0 +1,34 @@
+name: VoiceChanger
+module: skills.voice_changer.main
+category: general
+description:
+  en: Changes the voice of your Wingman automatically. Customize it to your liking.
+  de: Wechselt die Stimme deines Wingman automatisch. Konfigurierbar nach eigenen Vorlieben.
+custom_properties:
+  - id: voice_changer_interval
+    name: Switching Interval
+    hint: The interval in seconds in which the voice should be changed. (Calculated from last interaction)
+    value: 180
+    required: true
+    property_type: number
+  - id: voice_changer_clearhistory
+    hint: Enable this to clear the message history (memory) when the voice is changed.
+    name: Clear history on voice switch
+    value: true
+    required: true
+    property_type: boolean
+  - id: voice_changer_voices
+    name: Available voices
+    hint: The voices your Wingman can use.
+    value: []
+    required: false
+    property_type: voice_selection
+    options:
+      - label: "multiple"
+        value: true
+  - id: voice_changer_personalityprompt
+    name: Personality Prompt
+    hint: A prompt used on voice change to generate a new personality. Leave empty to disable.
+ required: false + value: "" + property_type: textarea diff --git a/templates/migration/1_5_0/skills/voice_changer/logo.png b/templates/migration/1_5_0/skills/voice_changer/logo.png new file mode 100644 index 00000000..8bcceddf Binary files /dev/null and b/templates/migration/1_5_0/skills/voice_changer/logo.png differ diff --git a/templates/migration/1_5_0/skills/voice_changer/main.py b/templates/migration/1_5_0/skills/voice_changer/main.py new file mode 100644 index 00000000..47c422c7 --- /dev/null +++ b/templates/migration/1_5_0/skills/voice_changer/main.py @@ -0,0 +1,270 @@ +import time +from random import randrange +from typing import TYPE_CHECKING +from api.interface import ( + SettingsConfig, + SkillConfig, + VoiceSelection, + WingmanInitializationError, +) +from api.enums import ( + LogType, + TtsProvider, + WingmanProTtsProvider, +) +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class VoiceChanger(Skill): + + def __init__( + self, + config: SkillConfig, + settings: SettingsConfig, + wingman: "OpenAiWingman", + ) -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + + self.voice_switching = True + self.voices = [] + self.voice_timespan = 0 + self.voice_last_message = None + self.voice_current_index = None + self.clear_history = False + + self.context_generation = True + self.context_prompt = None + self.context_personality = "" + self.context_personality_next = "" + + self.active = False + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + self.voice_timespan = self.retrieve_custom_property_value( + "voice_changer_interval", errors + ) + if not self.voice_timespan or self.voice_timespan < 0: + self.voice_switching = False + + self.clear_history = self.retrieve_custom_property_value( + "voice_changer_clearhistory", errors + ) + + self.context_prompt = self.retrieve_custom_property_value( + "voice_changer_personalityprompt", errors + ) + if not self.context_prompt: + self.context_generation = False + + # prepare voices + voices: list[VoiceSelection] = self.retrieve_custom_property_value( + "voice_changer_voices", errors + ) + if not voices or len(voices) == 0: + self.voice_switching = False + else: + # we have to initiate all providers here + initiated_providers = [] + initiate_provider_error = False + + for voice in voices: + voice_provider = voice.provider + if voice_provider not in initiated_providers: + initiated_providers.append(voice_provider) + + # initiate provider + if voice_provider == TtsProvider.OPENAI and not self.wingman.openai: + await self.wingman.validate_and_set_openai(errors) + if len(errors) > 0: + initiate_provider_error = True + elif ( + voice_provider == TtsProvider.AZURE + and not self.wingman.openai_azure + ): + await self.wingman.validate_and_set_azure(errors) + if len(errors) > 0: + initiate_provider_error = True + elif ( + voice_provider == TtsProvider.ELEVENLABS + and not self.wingman.elevenlabs + ): + await self.wingman.validate_and_set_elevenlabs(errors) + if len(errors) > 0: + initiate_provider_error = True + elif ( + voice_provider == TtsProvider.WINGMAN_PRO + and not self.wingman.wingman_pro + ): + await self.wingman.validate_and_set_wingman_pro() + + if not initiate_provider_error: + self.voices = voices + else: + self.voice_switching = False + + return errors + + async def prepare(self) -> None: + self.active = True + + # prepare first personality + if self.context_generation: + 
self.threaded_execution(self._generate_new_context)
+
+ async def unload(self) -> None:
+ self.active = False
+
+ async def on_add_user_message(self, message: str):
+ if not self.active:
+ return
+
+ if self.voice_last_message is None:
+ await self._initiate_change()
+ self.voice_last_message = time.time()
+ return
+
+ last_message_diff = time.time() - self.voice_last_message
+ last_message_diff = round(last_message_diff, 0)
+ self.voice_last_message = time.time()
+
+ if last_message_diff >= self.voice_timespan:
+ await self._initiate_change()
+
+ async def _initiate_change(self):
+ messages = []
+ if self.voice_switching:
+ messages.append(self._switch_voice())
+ if self.context_generation:
+ messages.append(self._switch_personality())
+ if self.clear_history:
+ self.wingman.reset_conversation_history()
+
+ # await the queued coroutines, then sort out empty messages
+ results = [await message for message in messages]
+ messages = [message for message in results if message]
+
+ if messages:
+ await self.printr.print_async(
+ text="\n".join(messages),
+ color=LogType.INFO,
+ source_name=self.wingman.name,
+ )
+
+ async def _switch_voice(self) -> str:
+ """Switch voice to the given voice setting."""
+
+ # choose a random voice, avoiding an immediate repeat when more than one is available
+ while True:
+ index = randrange(len(self.voices))
+ if (
+ self.voice_current_index is None
+ or len(self.voices) == 1
+ or index != self.voice_current_index
+ ):
+ self.voice_current_index = index
+ voice_setting = self.voices[index]
+ break
+
+ if not voice_setting:
+ await self.printr.print_async(
+ "Voice switching failed due to missing voice settings.",
+ LogType.ERROR,
+ )
+ return "Voice switching failed due to missing voice settings."
+
+ voice_provider = voice_setting.provider
+ voice = voice_setting.voice
+ voice_name = None
+ error = False
+
+ if voice_provider == TtsProvider.WINGMAN_PRO:
+ if voice_setting.subprovider == WingmanProTtsProvider.OPENAI:
+ voice_name = voice.value
+ provider_name = "Wingman Pro / OpenAI"
+ self.wingman.config.openai.tts_voice = voice
+ elif voice_setting.subprovider == WingmanProTtsProvider.AZURE:
+ voice_name = voice
+ provider_name = "Wingman Pro / Azure TTS"
+ self.wingman.config.azure.tts.voice = voice
+ elif voice_provider == TtsProvider.OPENAI:
+ voice_name = voice.value
+ provider_name = "OpenAI"
+ self.wingman.config.openai.tts_voice = voice
+ elif voice_provider == TtsProvider.ELEVENLABS:
+ voice_name = voice.name or voice.id
+ provider_name = "Elevenlabs"
+ self.wingman.config.elevenlabs.voice = voice
+ self.wingman.config.elevenlabs.output_streaming = False
+ elif voice_provider == TtsProvider.AZURE:
+ voice_name = voice
+ provider_name = "Azure TTS"
+ self.wingman.config.azure.tts.voice = voice
+ elif voice_provider == TtsProvider.XVASYNTH:
+ voice_name = voice.voice_name
+ provider_name = "XVASynth"
+ self.wingman.config.xvasynth.voice = voice
+ elif voice_provider == TtsProvider.EDGE_TTS:
+ voice_name = voice
+ provider_name = "Edge TTS"
+ self.wingman.config.edge_tts.voice = voice
+ else:
+ error = True
+
+ if error or not voice_name or not voice_provider:
+ await self.printr.print_async(
+ f"Voice switching failed due to an unknown voice provider/subprovider. Setting: {voice_setting}",
+ LogType.ERROR,
+ )
+ return f"Voice switching failed due to an unknown voice provider/subprovider. 
Setting: {voice_setting}" + + self.wingman.config.features.tts_provider = voice_provider + + return f"Switched {self.wingman.name}'s voice to {voice_name} ({provider_name})" + + async def _switch_personality(self) -> str: + # if no next context is available, generate a new one + if not self.context_personality_next: + await self._generate_new_context() + + self.context_personality = self.context_personality_next + self.context_personality_next = "" + + self.threaded_execution(self._generate_new_context) + + return "Switched personality context." + + async def _generate_new_context(self): + messages = [ + { + "role": "system", + "content": """ + Generate new context based on the input in the \"You\"-perspective. + Like \"You are a grumpy...\" or \"You are an enthusiastic...\" and so on. + Only output the personality description without additional context or commentary. + """, + }, + { + "role": "user", + "content": self.context_prompt, + }, + ] + completion = await self.llm_call(messages) + generated_context = ( + completion.choices[0].message.content + if completion and completion.choices + else "" + ) + + self.context_personality_next = generated_context + + async def get_prompt(self) -> str | None: + prompts = [] + if self.config.prompt: + prompts.append(self.config.prompt) + if self.context_generation: + prompts.append(self.context_personality) + return " ".join(prompts) if prompts else None diff --git a/templates/migration/1_5_0/skills/web_search/default_config.yaml b/templates/migration/1_5_0/skills/web_search/default_config.yaml new file mode 100644 index 00000000..f3f04bd2 --- /dev/null +++ b/templates/migration/1_5_0/skills/web_search/default_config.yaml @@ -0,0 +1,27 @@ +name: WebSearch +module: skills.web_search.main +category: general +description: + en: Searches the web using the DuckDuckGo search engine. + de: Nutze die DuckDuckGo Suchmaschine um Daten aus dem Internet zu laden. +hint: + en: Start your queries with "Search the web for..." or "Search the internet for..." + de: Starte deine Anfragen mit 'Suche im Web nach...' oder 'Suche im Internet nach...'. +examples: + - question: + en: Search the internet for recent news about Star Citizen and give me the highlights. + de: Suche im Internet nach aktuellen Neuigkeiten zu Star Citizen und fasse sie zusammen. + answer: + en: I found the following news about Star Citizen... + de: Ich habe folgende Neuigkeiten zu Star Citizen gefunden... +prompt: | + You can also search the internet for topics identified by the user by using your `web_search_function` tool. + + Examples indicating that the user wants to search the internet are: + + - "Search the web for..." or "Search the internet for..." + - "Use DuckDuckGo to search for..." + - "Find more information about..." + - "What is the latest news about..." (or any other mention of current or recent information) + - How is the weather forecast for... (or any other mention of future information) + - The user is asking a question that can be answered by searching the internet and is not part of your general knowledge. 
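+# Illustrative tool call for this skill (comment only; shape taken from the function
+# definition in main.py): web_search_function(search_query="Star Citizen patch notes", search_type="news")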
diff --git a/templates/migration/1_5_0/skills/web_search/logo.png b/templates/migration/1_5_0/skills/web_search/logo.png
new file mode 100644
index 00000000..3411ad07
Binary files /dev/null and b/templates/migration/1_5_0/skills/web_search/logo.png differ
diff --git a/templates/migration/1_5_0/skills/web_search/main.py b/templates/migration/1_5_0/skills/web_search/main.py
new file mode 100644
index 00000000..da86a49f
--- /dev/null
+++ b/templates/migration/1_5_0/skills/web_search/main.py
@@ -0,0 +1,206 @@
+import time
+import math
+from urllib.parse import urlparse
+from copy import deepcopy
+from typing import TYPE_CHECKING
+from duckduckgo_search import DDGS
+from trafilatura import fetch_url, extract
+from trafilatura.settings import DEFAULT_CONFIG
+from api.interface import SettingsConfig, SkillConfig
+from api.enums import LogType
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+ from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class WebSearch(Skill):
+
+ def __init__(
+ self,
+ config: SkillConfig,
+ settings: SettingsConfig,
+ wingman: "OpenAiWingman",
+ ) -> None:
+ super().__init__(config=config, settings=settings, wingman=wingman)
+
+ # Set default and custom behavior
+ self.max_time = 5
+ self.max_results = 5
+ self.min_results = 2
+ self.max_result_size = 4000
+
+ # Align the trafilatura settings with the time budget above
+
+ # Copy default config file that comes with trafilatura
+ self.trafilatura_config = deepcopy(DEFAULT_CONFIG)
+ # Change the download timeout and max redirects defaults in the config
+ self.trafilatura_config["DEFAULT"][
+ "DOWNLOAD_TIMEOUT"
+ ] = f"{math.ceil(self.max_time/2)}"
+ self.trafilatura_config["DEFAULT"]["MAX_REDIRECTS"] = "3"
+
+ def get_tools(self) -> list[tuple[str, dict]]:
+ tools = [
+ (
+ "web_search_function",
+ {
+ "type": "function",
+ "function": {
+ "name": "web_search_function",
+ "description": "Searches the internet / web for the topic identified by the user or identified by the AI to answer a user question.",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "search_query": {
+ "type": "string",
+ "description": "The topic to search the internet for.",
+ },
+ "search_type": {
+ "type": "string",
+ "description": "The type of search to perform. Use 'news' if the user is looking for current events, weather, or recent news. Use 'general' for general detailed information about a topic. Use 'single_site' if the user has specified one particular web page that they want you to review, and then use the 'single_site_url' parameter to identify the web page. If it is not clear what type of search the user wants, ask.",
+ "enum": [
+ "news",
+ "general",
+ "single_site",
+ ],
+ },
+ "single_site_url": {
+ "type": "string",
+ "description": "If the user wants to search a single website, the specific site url that they want to search, formatted as a proper url.",
+ },
+ },
+ "required": ["search_query", "search_type"],
+ },
+ },
+ },
+ ),
+ ]
+ return tools
+
+ async def is_waiting_response_needed(self, tool_name: str) -> bool:
+ return True
+
+ async def execute_tool(
+ self, tool_name: str, parameters: dict[str, any]
+ ) -> tuple[str, str]:
+ function_response = "No search results found or search failed."
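+ # default answer; replaced below once the search yields usable content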
+ instant_response = "" + + if tool_name == "web_search_function": + if self.settings.debug_mode: + self.start_execution_benchmark() + await self.printr.print_async( + f"Executing web_search_function with parameters: {parameters}", + color=LogType.INFO, + ) + final_results = "" + search_query = parameters.get("search_query") + search_type = parameters.get("search_type") + site_url = parameters.get("single_site_url") + + # Since site_url is not a required parameter, it is possible the AI may not to include it even when using single site type, and instead put the web address in the query field; check if that is the case. + if not site_url and search_type == "single_site": + try: + urlparse(search_query) + site_url = search_query + except ValueError: + await self.printr.print_async( + "Tried single site search but no valid url to search.", + color=LogType.INFO, + ) + + processed_results = [] + + async def gather_information(result): + title = result.get("title") + link = result.get("url") + if search_type == "general": + link = result.get("href") + body = result.get("body") + + # If doing a deep dive on a single site get as much content as possible + if search_type == "single_site": + self.max_result_size = 20000 + else: + self.max_result_size = 4000 + # If a link is in search results or identified by the user, then use trafilatura to download its content and extract the content to text + if link: + trafilatura_url = link + trafilatura_downloaded = fetch_url( + trafilatura_url, config=self.trafilatura_config + ) + if self.settings.debug_mode: + await self.printr.print_async( + f"web_search skill analyzing website at: {link} for full content using trafilatura", + color=LogType.INFO, + ) + trafilatura_result = extract( + trafilatura_downloaded, + include_comments=False, + include_tables=False, + ) + if trafilatura_result: + processed_results.append( + title + + "\n" + + link + + "\n" + + trafilatura_result[: self.max_result_size] + ) + + else: + if self.settings.debug_mode: + await self.printr.print_async( + f"web_search skill could not extract results from website at: {link} for full content using trafilatura", + color=LogType.INFO, + ) + processed_results.append(title + "\n" + link + "\n" + body) + + if search_type == "general": + self.min_results = 2 + self.max_time = 5 + search_results = DDGS().text( + search_query, safesearch="off", max_results=self.max_results + ) + elif search_type == "news": + self.min_results = 2 + self.max_time = 5 + search_results = DDGS().news( + search_query, safesearch="off", max_results=self.max_results + ) + else: + search_results = [ + {"url": site_url, "title": "Site Requested", "body": "None found"} + ] + self.min_results = 1 + self.max_time = 30 + + self.trafilatura_config["DEFAULT"][ + "DOWNLOAD_TIMEOUT" + ] = f"{math.ceil(self.max_time/2)}" + + start_time = time.time() + + for result in search_results: + self.threaded_execution(gather_information, result) + + while ( + len(processed_results) < self.min_results + and time.time() - start_time < self.max_time + ): + time.sleep(0.1) + + final_results = "\n\n".join(processed_results) + if final_results: + if self.settings.debug_mode: + await self.printr.print_async( + f"Final web_search skill results used as context for AI response: \n\n {final_results}", + color=LogType.INFO, + ) + function_response = final_results + + if self.settings.debug_mode: + await self.print_execution_time() + + return function_response, instant_response diff --git a/templates/migration/1_5_0/skills/web_search/requirements.txt 
b/templates/migration/1_5_0/skills/web_search/requirements.txt new file mode 100644 index 00000000..522f0721 Binary files /dev/null and b/templates/migration/1_5_0/skills/web_search/requirements.txt differ diff --git a/templates/skills/ask_perplexity/default_config.yaml b/templates/skills/ask_perplexity/default_config.yaml new file mode 100644 index 00000000..ff992dfd --- /dev/null +++ b/templates/skills/ask_perplexity/default_config.yaml @@ -0,0 +1,31 @@ +name: AskPerplexity +module: skills.ask_perplexity.main +category: general +description: + en: Uses the Perplexity API to get up-to-date information on a wide range of topics. Perplexity is a paid service, you will need a funded account with an active API key, see https://www.perplexity.ai/settings/api + de: Verwendet die Perplexity-API, um aktuelle Informationen zu einer Vielzahl von Themen zu erhalten. Perplexity ist ein kostenpflichtiger Dienst, ein Konto mit Guthaben und aktiven API key ist notwendig, siehe https://www.perplexity.ai/settings/api +examples: + - question: + en: How is the weather today in Berlin? + de: Wie ist das Wetter heute? + answer: + en: Today, the weather in Berlin is cloudy with a high of 20°C and a ... (more details) + de: Heute ist das Wetter in Berlin bewölkt, mit einer Höchsttemperatur von 20°C und ... (mehr Details) + - question: + en: In Star Citizen mining, what is currently the best way to find quantanium? + de: Beim Mining in Star Citizen, wie finde ich aktuell am besten Quantanium? + answer: + en: To find Quantanium for mining in Star Citizen, your best bet is Lyria, as it offers ... (more details) + de: Um Quantanium im Star Citizen Universum zu finden, ist Lyria der beste Ort, da dort ... (mehr Details) +prompt: | + There is a new function: 'ask_perplexity' + Perplexity is a powerful tool that can provide you with up-to-date information on a wide range of topics. + Use it everytime the user asks a question that implies the need for up-to-date information. + Always use this if no other available skill matches the request better to get up-to-date information. +custom_properties: + - id: instant_response + name: Instant Response + hint: If set, the Perplexity answer will be used instantly and unprocessed. This is faster but will not include format and/or language guidelines set in your wingman. 
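+ # Illustrative: keep this disabled if your Wingman should rephrase Perplexity answers in its own voice and language.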
+ value: False + required: false + property_type: boolean diff --git a/templates/skills/ask_perplexity/logo.png b/templates/skills/ask_perplexity/logo.png new file mode 100644 index 00000000..3479ef77 Binary files /dev/null and b/templates/skills/ask_perplexity/logo.png differ diff --git a/templates/skills/ask_perplexity/main.py b/templates/skills/ask_perplexity/main.py new file mode 100644 index 00000000..869d3c97 --- /dev/null +++ b/templates/skills/ask_perplexity/main.py @@ -0,0 +1,84 @@ +from api.interface import ( + WingmanInitializationError, +) +from skills.skill_base import Skill + +class AskPerplexity(Skill): + + def __init__( + self, + *args, + **kwargs, + ) -> None: + super().__init__(*args, **kwargs) + + self.instant_response = False + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + + if not self.wingman.perplexity: + await self.wingman.validate_and_set_perplexity(errors) + + self.instant_response = self.retrieve_custom_property_value("instant_response", errors) + + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + tools = [ + ( + "ask_perplexity", + { + "type": "function", + "function": { + "name": "ask_perplexity", + "description": "Expects a question that is answered with up-to-date information from the internet.", + "parameters": { + "type": "object", + "properties": { + "question": {"type": "string"}, + }, + "required": ["question"], + }, + }, + + }, + ), + ] + return tools + + async def execute_tool( + self, tool_name: str, parameters: dict[str, any] + ) -> tuple[str, str]: + function_response = "" + instant_response = "" + + if tool_name in ["ask_perplexity"]: + if self.settings.debug_mode: + self.start_execution_benchmark() + + if tool_name == "ask_perplexity" and "question" in parameters: + function_response = self.ask_perplexity(parameters["question"]) + if self.instant_response: + instant_response = function_response + + if self.settings.debug_mode: + await self.printr.print_async( + f"Perplexity answer: {function_response}" + ) + await self.print_execution_time() + + return function_response, instant_response + + def ask_perplexity(self, question: str) -> str: + """Uses the Perplexity API to answer a question.""" + + completion = self.wingman.perplexity.ask( + messages=[{"role": "user", "content": question}], + model=self.wingman.config.perplexity.conversation_model.value, + ) + + if completion and completion.choices: + return completion.choices[0].message.content + else: + return "Error: Unable to retrieve a response from Perplexity API." diff --git a/templates/skills/file_manager/default_config.yaml b/templates/skills/file_manager/default_config.yaml index 16dc408b..deb80aaf 100644 --- a/templates/skills/file_manager/default_config.yaml +++ b/templates/skills/file_manager/default_config.yaml @@ -2,11 +2,11 @@ name: FileManager module: skills.file_manager.main category: general description: - en: Manage local files, save, load and create directories. Supports various text-based file formats. - de: Verwalte lokale Dateien, speichere, lade oder erstelle Verzeichnisse. Unterstützt verschiedene text-basierte Formate. + en: Manage local files, save, load and create directories. Supports various text-based file formats and reading PDFs. + de: Verwalte lokale Dateien, speichere, lade und erstelle Verzeichnisse. Unterstützt verschiedene text-basierte Formate und das Lesen von PDFs. hint: - en: - de: + en: + de: examples: - question: en: Save 'Hello, World!' to hello.txt. 
@@ -26,9 +26,15 @@ examples: answer: en: (creates a directory named 'Projects' in the default directory) de: (erstellt ein Verzeichnis namens 'Projekte' im Standardverzeichnis) + - question: + en: Read page 5 of example.pdf. + de: Lies Seite 5 von example.pdf. + answer: + en: (loads page 5 of example.pdf and reads it into memory) + de: (lädt Seite 5 von example.pdf und liest sie in den Speicher) prompt: | You can also save text to various file formats, load text from files, or create directories as specified by the user. - You support all plain text file formats. + You support reading and writing all plain text file formats and reading PDF files. When adding text to an existing file, you follow these rules: (1) determine if it is appropriate to add a new line before the added text or ask the user if you do not know. (2) only add content to an existing file if you are sure that is what the user wants. @@ -46,4 +52,4 @@ custom_properties: name: Allow overwrite existing files property_type: boolean required: true - value: false + value: false \ No newline at end of file diff --git a/templates/skills/file_manager/dependencies/_cffi_backend.cp311-win_amd64.pyd b/templates/skills/file_manager/dependencies/_cffi_backend.cp311-win_amd64.pyd new file mode 100644 index 00000000..9bb0309f Binary files /dev/null and b/templates/skills/file_manager/dependencies/_cffi_backend.cp311-win_amd64.pyd differ diff --git a/templates/skills/file_manager/dependencies/bin/dumppdf.py b/templates/skills/file_manager/dependencies/bin/dumppdf.py new file mode 100644 index 00000000..516f0347 --- /dev/null +++ b/templates/skills/file_manager/dependencies/bin/dumppdf.py @@ -0,0 +1,482 @@ +"""Extract pdf structure in XML format""" +import logging +import os.path +import re +import sys +from typing import Any, Container, Dict, Iterable, List, Optional, TextIO, Union, cast +from argparse import ArgumentParser + +import pdfminer +from pdfminer.pdfdocument import PDFDocument, PDFNoOutlines, PDFXRefFallback +from pdfminer.pdfpage import PDFPage +from pdfminer.pdfparser import PDFParser +from pdfminer.pdftypes import ( + PDFStream, + PDFObjRef, + resolve1, + stream_value, +) +from pdfminer.pdfexceptions import ( + PDFTypeError, + PDFValueError, + PDFObjectNotFound, + PDFIOError, +) +from pdfminer.psparser import PSKeyword, PSLiteral, LIT +from pdfminer.utils import isnumber + +logging.basicConfig() +logger = logging.getLogger(__name__) + +ESC_PAT = re.compile(r'[\000-\037&<>()"\042\047\134\177-\377]') + + +def escape(s: Union[str, bytes]) -> str: + if isinstance(s, bytes): + us = str(s, "latin-1") + else: + us = s + return ESC_PAT.sub(lambda m: "&#%d;" % ord(m.group(0)), us) + + +def dumpxml(out: TextIO, obj: object, codec: Optional[str] = None) -> None: + if obj is None: + out.write("") + return + + if isinstance(obj, dict): + out.write('\n' % len(obj)) + for (k, v) in obj.items(): + out.write("%s\n" % k) + out.write("") + dumpxml(out, v) + out.write("\n") + out.write("") + return + + if isinstance(obj, list): + out.write('\n' % len(obj)) + for v in obj: + dumpxml(out, v) + out.write("\n") + out.write("") + return + + if isinstance(obj, (str, bytes)): + out.write('%s' % (len(obj), escape(obj))) + return + + if isinstance(obj, PDFStream): + if codec == "raw": + # Bug: writing bytes to text I/O. This will raise TypeError. + out.write(obj.get_rawdata()) # type: ignore [arg-type] + elif codec == "binary": + # Bug: writing bytes to text I/O. This will raise TypeError. 
+ out.write(obj.get_data()) # type: ignore [arg-type] + else: + out.write("\n\n") + dumpxml(out, obj.attrs) + out.write("\n\n") + if codec == "text": + data = obj.get_data() + out.write('%s\n' % (len(data), escape(data))) + out.write("") + return + + if isinstance(obj, PDFObjRef): + out.write('' % obj.objid) + return + + if isinstance(obj, PSKeyword): + # Likely bug: obj.name is bytes, not str + out.write("%s" % obj.name) # type: ignore [str-bytes-safe] + return + + if isinstance(obj, PSLiteral): + # Likely bug: obj.name may be bytes, not str + out.write("%s" % obj.name) # type: ignore [str-bytes-safe] + return + + if isnumber(obj): + out.write("%s" % obj) + return + + raise PDFTypeError(obj) + + +def dumptrailers( + out: TextIO, doc: PDFDocument, show_fallback_xref: bool = False +) -> None: + for xref in doc.xrefs: + if not isinstance(xref, PDFXRefFallback) or show_fallback_xref: + out.write("\n") + dumpxml(out, xref.get_trailer()) + out.write("\n\n\n") + no_xrefs = all(isinstance(xref, PDFXRefFallback) for xref in doc.xrefs) + if no_xrefs and not show_fallback_xref: + msg = ( + "This PDF does not have an xref. Use --show-fallback-xref if " + "you want to display the content of a fallback xref that " + "contains all objects." + ) + logger.warning(msg) + return + + +def dumpallobjs( + out: TextIO, + doc: PDFDocument, + codec: Optional[str] = None, + show_fallback_xref: bool = False, +) -> None: + visited = set() + out.write("") + for xref in doc.xrefs: + for objid in xref.get_objids(): + if objid in visited: + continue + visited.add(objid) + try: + obj = doc.getobj(objid) + if obj is None: + continue + out.write('\n' % objid) + dumpxml(out, obj, codec=codec) + out.write("\n\n\n") + except PDFObjectNotFound as e: + print("not found: %r" % e) + dumptrailers(out, doc, show_fallback_xref) + out.write("") + return + + +def dumpoutline( + outfp: TextIO, + fname: str, + objids: Any, + pagenos: Container[int], + password: str = "", + dumpall: bool = False, + codec: Optional[str] = None, + extractdir: Optional[str] = None, +) -> None: + fp = open(fname, "rb") + parser = PDFParser(fp) + doc = PDFDocument(parser, password) + pages = { + page.pageid: pageno + for (pageno, page) in enumerate(PDFPage.create_pages(doc), 1) + } + + def resolve_dest(dest: object) -> Any: + if isinstance(dest, (str, bytes)): + dest = resolve1(doc.get_dest(dest)) + elif isinstance(dest, PSLiteral): + dest = resolve1(doc.get_dest(dest.name)) + if isinstance(dest, dict): + dest = dest["D"] + if isinstance(dest, PDFObjRef): + dest = dest.resolve() + return dest + + try: + outlines = doc.get_outlines() + outfp.write("\n") + for (level, title, dest, a, se) in outlines: + pageno = None + if dest: + dest = resolve_dest(dest) + pageno = pages[dest[0].objid] + elif a: + action = a + if isinstance(action, dict): + subtype = action.get("S") + if subtype and repr(subtype) == "/'GoTo'" and action.get("D"): + dest = resolve_dest(action["D"]) + pageno = pages[dest[0].objid] + s = escape(title) + outfp.write(f'\n') + if dest is not None: + outfp.write("") + dumpxml(outfp, dest) + outfp.write("\n") + if pageno is not None: + outfp.write("%r\n" % pageno) + outfp.write("\n") + outfp.write("\n") + except PDFNoOutlines: + pass + parser.close() + fp.close() + return + + +LITERAL_FILESPEC = LIT("Filespec") +LITERAL_EMBEDDEDFILE = LIT("EmbeddedFile") + + +def extractembedded(fname: str, password: str, extractdir: str) -> None: + def extract1(objid: int, obj: Dict[str, Any]) -> None: + filename = os.path.basename(obj.get("UF") or cast(bytes, 
obj.get("F")).decode()) + fileref = obj["EF"].get("UF") or obj["EF"].get("F") + fileobj = doc.getobj(fileref.objid) + if not isinstance(fileobj, PDFStream): + error_msg = ( + "unable to process PDF: reference for %r is not a " + "PDFStream" % filename + ) + raise PDFValueError(error_msg) + if fileobj.get("Type") is not LITERAL_EMBEDDEDFILE: + raise PDFValueError( + "unable to process PDF: reference for %r " + "is not an EmbeddedFile" % (filename) + ) + path = os.path.join(extractdir, "%.6d-%s" % (objid, filename)) + if os.path.exists(path): + raise PDFIOError("file exists: %r" % path) + print("extracting: %r" % path) + os.makedirs(os.path.dirname(path), exist_ok=True) + out = open(path, "wb") + out.write(fileobj.get_data()) + out.close() + return + + with open(fname, "rb") as fp: + parser = PDFParser(fp) + doc = PDFDocument(parser, password) + extracted_objids = set() + for xref in doc.xrefs: + for objid in xref.get_objids(): + obj = doc.getobj(objid) + if ( + objid not in extracted_objids + and isinstance(obj, dict) + and obj.get("Type") is LITERAL_FILESPEC + ): + extracted_objids.add(objid) + extract1(objid, obj) + return + + +def dumppdf( + outfp: TextIO, + fname: str, + objids: Iterable[int], + pagenos: Container[int], + password: str = "", + dumpall: bool = False, + codec: Optional[str] = None, + extractdir: Optional[str] = None, + show_fallback_xref: bool = False, +) -> None: + fp = open(fname, "rb") + parser = PDFParser(fp) + doc = PDFDocument(parser, password) + if objids: + for objid in objids: + obj = doc.getobj(objid) + dumpxml(outfp, obj, codec=codec) + if pagenos: + for (pageno, page) in enumerate(PDFPage.create_pages(doc)): + if pageno in pagenos: + if codec: + for obj in page.contents: + obj = stream_value(obj) + dumpxml(outfp, obj, codec=codec) + else: + dumpxml(outfp, page.attrs) + if dumpall: + dumpallobjs(outfp, doc, codec, show_fallback_xref) + if (not objids) and (not pagenos) and (not dumpall): + dumptrailers(outfp, doc, show_fallback_xref) + fp.close() + if codec not in ("raw", "binary"): + outfp.write("\n") + return + + +def create_parser() -> ArgumentParser: + parser = ArgumentParser(description=__doc__, add_help=True) + parser.add_argument( + "files", + type=str, + default=None, + nargs="+", + help="One or more paths to PDF files.", + ) + + parser.add_argument( + "--version", + "-v", + action="version", + version=f"pdfminer.six v{pdfminer.__version__}", + ) + parser.add_argument( + "--debug", + "-d", + default=False, + action="store_true", + help="Use debug logging level.", + ) + procedure_parser = parser.add_mutually_exclusive_group() + procedure_parser.add_argument( + "--extract-toc", + "-T", + default=False, + action="store_true", + help="Extract structure of outline", + ) + procedure_parser.add_argument( + "--extract-embedded", "-E", type=str, help="Extract embedded files" + ) + + parse_params = parser.add_argument_group( + "Parser", description="Used during PDF parsing" + ) + parse_params.add_argument( + "--page-numbers", + type=int, + default=None, + nargs="+", + help="A space-seperated list of page numbers to parse.", + ) + parse_params.add_argument( + "--pagenos", + "-p", + type=str, + help="A comma-separated list of page numbers to parse. 
Included for " + "legacy applications, use --page-numbers for more idiomatic " + "argument entry.", + ) + parse_params.add_argument( + "--objects", + "-i", + type=str, + help="Comma separated list of object numbers to extract", + ) + parse_params.add_argument( + "--all", + "-a", + default=False, + action="store_true", + help="If the structure of all objects should be extracted", + ) + parse_params.add_argument( + "--show-fallback-xref", + action="store_true", + help="Additionally show the fallback xref. Use this if the PDF " + "has zero or only invalid xref's. This setting is ignored if " + "--extract-toc or --extract-embedded is used.", + ) + parse_params.add_argument( + "--password", + "-P", + type=str, + default="", + help="The password to use for decrypting PDF file.", + ) + + output_params = parser.add_argument_group( + "Output", description="Used during output generation." + ) + output_params.add_argument( + "--outfile", + "-o", + type=str, + default="-", + help='Path to file where output is written. Or "-" (default) to ' + "write to stdout.", + ) + codec_parser = output_params.add_mutually_exclusive_group() + codec_parser.add_argument( + "--raw-stream", + "-r", + default=False, + action="store_true", + help="Write stream objects without encoding", + ) + codec_parser.add_argument( + "--binary-stream", + "-b", + default=False, + action="store_true", + help="Write stream objects with binary encoding", + ) + codec_parser.add_argument( + "--text-stream", + "-t", + default=False, + action="store_true", + help="Write stream objects as plain text", + ) + + return parser + + +def main(argv: Optional[List[str]] = None) -> None: + parser = create_parser() + args = parser.parse_args(args=argv) + + if args.debug: + logging.getLogger().setLevel(logging.DEBUG) + + if args.outfile == "-": + outfp = sys.stdout + else: + outfp = open(args.outfile, "w") + + if args.objects: + objids = [int(x) for x in args.objects.split(",")] + else: + objids = [] + + if args.page_numbers: + pagenos = {x - 1 for x in args.page_numbers} + elif args.pagenos: + pagenos = {int(x) - 1 for x in args.pagenos.split(",")} + else: + pagenos = set() + + password = args.password + + if args.raw_stream: + codec: Optional[str] = "raw" + elif args.binary_stream: + codec = "binary" + elif args.text_stream: + codec = "text" + else: + codec = None + + for fname in args.files: + if args.extract_toc: + dumpoutline( + outfp, + fname, + objids, + pagenos, + password=password, + dumpall=args.all, + codec=codec, + extractdir=None, + ) + elif args.extract_embedded: + extractembedded(fname, password=password, extractdir=args.extract_embedded) + else: + dumppdf( + outfp, + fname, + objids, + pagenos, + password=password, + dumpall=args.all, + codec=codec, + extractdir=None, + show_fallback_xref=args.show_fallback_xref, + ) + + outfp.close() + + +if __name__ == "__main__": + main() diff --git a/templates/skills/file_manager/dependencies/bin/normalizer.exe b/templates/skills/file_manager/dependencies/bin/normalizer.exe new file mode 100644 index 00000000..d69c7297 Binary files /dev/null and b/templates/skills/file_manager/dependencies/bin/normalizer.exe differ diff --git a/templates/skills/file_manager/dependencies/bin/pdf2txt.py b/templates/skills/file_manager/dependencies/bin/pdf2txt.py new file mode 100644 index 00000000..981bd7a6 --- /dev/null +++ b/templates/skills/file_manager/dependencies/bin/pdf2txt.py @@ -0,0 +1,317 @@ +"""A command line tool for extracting text and images from PDF and +output it to plain text, html, xml or tags.""" 
+import argparse +import logging +import sys +from typing import Any, Container, Iterable, List, Optional + +import pdfminer.high_level +from pdfminer.layout import LAParams +from pdfminer.utils import AnyIO +from pdfminer.pdfexceptions import PDFValueError + +logging.basicConfig() + +OUTPUT_TYPES = ((".htm", "html"), (".html", "html"), (".xml", "xml"), (".tag", "tag")) + + +def float_or_disabled(x: str) -> Optional[float]: + if x.lower().strip() == "disabled": + return None + try: + return float(x) + except ValueError: + raise argparse.ArgumentTypeError(f"invalid float value: {x}") + + +def extract_text( + files: Iterable[str] = [], + outfile: str = "-", + laparams: Optional[LAParams] = None, + output_type: str = "text", + codec: str = "utf-8", + strip_control: bool = False, + maxpages: int = 0, + page_numbers: Optional[Container[int]] = None, + password: str = "", + scale: float = 1.0, + rotation: int = 0, + layoutmode: str = "normal", + output_dir: Optional[str] = None, + debug: bool = False, + disable_caching: bool = False, + **kwargs: Any, +) -> AnyIO: + if not files: + raise PDFValueError("Must provide files to work upon!") + + if output_type == "text" and outfile != "-": + for override, alttype in OUTPUT_TYPES: + if outfile.endswith(override): + output_type = alttype + + if outfile == "-": + outfp: AnyIO = sys.stdout + if sys.stdout.encoding is not None: + codec = "utf-8" + else: + outfp = open(outfile, "wb") + + for fname in files: + with open(fname, "rb") as fp: + pdfminer.high_level.extract_text_to_fp(fp, **locals()) + return outfp + + +def create_parser() -> argparse.ArgumentParser: + parser = argparse.ArgumentParser(description=__doc__, add_help=True) + parser.add_argument( + "files", + type=str, + default=None, + nargs="+", + help="One or more paths to PDF files.", + ) + + parser.add_argument( + "--version", + "-v", + action="version", + version=f"pdfminer.six v{pdfminer.__version__}", + ) + parser.add_argument( + "--debug", + "-d", + default=False, + action="store_true", + help="Use debug logging level.", + ) + parser.add_argument( + "--disable-caching", + "-C", + default=False, + action="store_true", + help="If caching or resources, such as fonts, should be disabled.", + ) + + parse_params = parser.add_argument_group( + "Parser", description="Used during PDF parsing" + ) + parse_params.add_argument( + "--page-numbers", + type=int, + default=None, + nargs="+", + help="A space-seperated list of page numbers to parse.", + ) + parse_params.add_argument( + "--pagenos", + "-p", + type=str, + help="A comma-separated list of page numbers to parse. " + "Included for legacy applications, use --page-numbers " + "for more idiomatic argument entry.", + ) + parse_params.add_argument( + "--maxpages", + "-m", + type=int, + default=0, + help="The maximum number of pages to parse.", + ) + parse_params.add_argument( + "--password", + "-P", + type=str, + default="", + help="The password to use for decrypting PDF file.", + ) + parse_params.add_argument( + "--rotation", + "-R", + default=0, + type=int, + help="The number of degrees to rotate the PDF " + "before other types of processing.", + ) + + la_params = LAParams() # will be used for defaults + la_param_group = parser.add_argument_group( + "Layout analysis", description="Used during layout analysis." 
+ ) + la_param_group.add_argument( + "--no-laparams", + "-n", + default=False, + action="store_true", + help="If layout analysis parameters should be ignored.", + ) + la_param_group.add_argument( + "--detect-vertical", + "-V", + default=la_params.detect_vertical, + action="store_true", + help="If vertical text should be considered during layout analysis", + ) + la_param_group.add_argument( + "--line-overlap", + type=float, + default=la_params.line_overlap, + help="If two characters have more overlap than this they " + "are considered to be on the same line. The overlap is specified " + "relative to the minimum height of both characters.", + ) + la_param_group.add_argument( + "--char-margin", + "-M", + type=float, + default=la_params.char_margin, + help="If two characters are closer together than this margin they " + "are considered to be part of the same line. The margin is " + "specified relative to the width of the character.", + ) + la_param_group.add_argument( + "--word-margin", + "-W", + type=float, + default=la_params.word_margin, + help="If two characters on the same line are further apart than this " + "margin then they are considered to be two separate words, and " + "an intermediate space will be added for readability. The margin " + "is specified relative to the width of the character.", + ) + la_param_group.add_argument( + "--line-margin", + "-L", + type=float, + default=la_params.line_margin, + help="If two lines are close together they are considered to " + "be part of the same paragraph. The margin is specified " + "relative to the height of a line.", + ) + la_param_group.add_argument( + "--boxes-flow", + "-F", + type=float_or_disabled, + default=la_params.boxes_flow, + help="Specifies how much a horizontal and vertical position of a " + "text matters when determining the order of lines. The value " + "should be within the range of -1.0 (only horizontal position " + "matters) to +1.0 (only vertical position matters). You can also " + "pass `disabled` to disable advanced layout analysis, and " + "instead return text based on the position of the bottom left " + "corner of the text box.", + ) + la_param_group.add_argument( + "--all-texts", + "-A", + default=la_params.all_texts, + action="store_true", + help="If layout analysis should be performed on text in figures.", + ) + + output_params = parser.add_argument_group( + "Output", description="Used during output generation." + ) + output_params.add_argument( + "--outfile", + "-o", + type=str, + default="-", + help="Path to file where output is written. " + 'Or "-" (default) to write to stdout.', + ) + output_params.add_argument( + "--output_type", + "-t", + type=str, + default="text", + help="Type of output to generate {text,html,xml,tag}.", + ) + output_params.add_argument( + "--codec", + "-c", + type=str, + default="utf-8", + help="Text encoding to use in output file.", + ) + output_params.add_argument( + "--output-dir", + "-O", + default=None, + help="The output directory to put extracted images in. If not given, " + "images are not extracted.", + ) + output_params.add_argument( + "--layoutmode", + "-Y", + default="normal", + type=str, + help="Type of layout to use when generating html " + "{normal,exact,loose}. If normal,each line is" + " positioned separately in the html. If exact" + ", each character is positioned separately in" + " the html. If loose, same result as normal " + "but with an additional newline after each " + "text line. 
Only used when output_type is html.", + ) + output_params.add_argument( + "--scale", + "-s", + type=float, + default=1.0, + help="The amount of zoom to use when generating html file. " + "Only used when output_type is html.", + ) + output_params.add_argument( + "--strip-control", + "-S", + default=False, + action="store_true", + help="Remove control statement from text. " + "Only used when output_type is xml.", + ) + + return parser + + +def parse_args(args: Optional[List[str]]) -> argparse.Namespace: + parsed_args = create_parser().parse_args(args=args) + + # Propagate parsed layout parameters to LAParams object + if parsed_args.no_laparams: + parsed_args.laparams = None + else: + parsed_args.laparams = LAParams( + line_overlap=parsed_args.line_overlap, + char_margin=parsed_args.char_margin, + line_margin=parsed_args.line_margin, + word_margin=parsed_args.word_margin, + boxes_flow=parsed_args.boxes_flow, + detect_vertical=parsed_args.detect_vertical, + all_texts=parsed_args.all_texts, + ) + + if parsed_args.page_numbers: + parsed_args.page_numbers = {x - 1 for x in parsed_args.page_numbers} + + if parsed_args.pagenos: + parsed_args.page_numbers = {int(x) - 1 for x in parsed_args.pagenos.split(",")} + + if parsed_args.output_type == "text" and parsed_args.outfile != "-": + for override, alttype in OUTPUT_TYPES: + if parsed_args.outfile.endswith(override): + parsed_args.output_type = alttype + + return parsed_args + + +def main(args: Optional[List[str]] = None) -> int: + parsed_args = parse_args(args) + outfp = extract_text(**vars(parsed_args)) + outfp.close() + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/INSTALLER b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/LICENSE b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/LICENSE new file mode 100644 index 00000000..29225eee --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/LICENSE @@ -0,0 +1,26 @@ + +Except when otherwise stated (look for LICENSE files in directories or +information at the beginning of each file) all software and +documentation is licensed as follows: + + The MIT License + + Permission is hereby granted, free of charge, to any person + obtaining a copy of this software and associated documentation + files (the "Software"), to deal in the Software without + restriction, including without limitation the rights to use, + copy, modify, merge, publish, distribute, sublicense, and/or + sell copies of the Software, and to permit persons to whom the + Software is furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included + in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. 
+ diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/METADATA b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/METADATA new file mode 100644 index 00000000..60b0779f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/METADATA @@ -0,0 +1,40 @@ +Metadata-Version: 2.1 +Name: cffi +Version: 1.17.1 +Summary: Foreign Function Interface for Python calling C code. +Home-page: http://cffi.readthedocs.org +Author: Armin Rigo, Maciej Fijalkowski +Author-email: python-cffi@googlegroups.com +License: MIT +Project-URL: Documentation, http://cffi.readthedocs.org/ +Project-URL: Source Code, https://github.com/python-cffi/cffi +Project-URL: Issue Tracker, https://github.com/python-cffi/cffi/issues +Project-URL: Changelog, https://cffi.readthedocs.io/en/latest/whatsnew.html +Project-URL: Downloads, https://github.com/python-cffi/cffi/releases +Project-URL: Contact, https://groups.google.com/forum/#!forum/python-cffi +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3.13 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: License :: OSI Approved :: MIT License +Requires-Python: >=3.8 +License-File: LICENSE +Requires-Dist: pycparser + + +CFFI +==== + +Foreign Function Interface for Python calling C code. +Please see the `Documentation `_. + +Contact +------- + +`Mailing list `_ diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/RECORD b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/RECORD new file mode 100644 index 00000000..b0a4eed8 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/RECORD @@ -0,0 +1,49 @@ +_cffi_backend.cp311-win_amd64.pyd,sha256=mu6Qz3mAyP9pS7P_4Gxx-H62phMDP3PjF0pzJkjTmYA,178176 +cffi-1.17.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cffi-1.17.1.dist-info/LICENSE,sha256=BLgPWwd7vtaICM_rreteNSPyqMmpZJXFh72W3x6sKjM,1294 +cffi-1.17.1.dist-info/METADATA,sha256=avJrvo-kUNx6iXJEaZVjGXNy42QS-YfjNHdJdeiBlFc,1571 +cffi-1.17.1.dist-info/RECORD,, +cffi-1.17.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cffi-1.17.1.dist-info/WHEEL,sha256=gP9oq1B6BRaUd7LM9qWlox_06QqMeQKU8gW0ScfyBso,101 +cffi-1.17.1.dist-info/entry_points.txt,sha256=y6jTxnyeuLnL-XJcDv8uML3n6wyYiGRg8MTp_QGJ9Ho,75 +cffi-1.17.1.dist-info/top_level.txt,sha256=rE7WR3rZfNKxWI9-jn6hsHCAl7MDkB-FmuQbxWjFehQ,19 +cffi/__init__.py,sha256=H6t_ebva6EeHpUuItFLW1gbRp94eZRNJODLaWKdbx1I,513 +cffi/__pycache__/__init__.cpython-311.pyc,, +cffi/__pycache__/_imp_emulation.cpython-311.pyc,, +cffi/__pycache__/_shimmed_dist_utils.cpython-311.pyc,, +cffi/__pycache__/api.cpython-311.pyc,, +cffi/__pycache__/backend_ctypes.cpython-311.pyc,, +cffi/__pycache__/cffi_opcode.cpython-311.pyc,, +cffi/__pycache__/commontypes.cpython-311.pyc,, +cffi/__pycache__/cparser.cpython-311.pyc,, +cffi/__pycache__/error.cpython-311.pyc,, +cffi/__pycache__/ffiplatform.cpython-311.pyc,, +cffi/__pycache__/lock.cpython-311.pyc,, +cffi/__pycache__/model.cpython-311.pyc,, +cffi/__pycache__/pkgconfig.cpython-311.pyc,, 
+cffi/__pycache__/recompiler.cpython-311.pyc,, +cffi/__pycache__/setuptools_ext.cpython-311.pyc,, +cffi/__pycache__/vengine_cpy.cpython-311.pyc,, +cffi/__pycache__/vengine_gen.cpython-311.pyc,, +cffi/__pycache__/verifier.cpython-311.pyc,, +cffi/_cffi_errors.h,sha256=zQXt7uR_m8gUW-fI2hJg0KoSkJFwXv8RGUkEDZ177dQ,3908 +cffi/_cffi_include.h,sha256=Exhmgm9qzHWzWivjfTe0D7Xp4rPUkVxdNuwGhMTMzbw,15055 +cffi/_embedding.h,sha256=EDKw5QrLvQoe3uosXB3H1xPVTYxsn33eV3A43zsA_Fw,18787 +cffi/_imp_emulation.py,sha256=RxREG8zAbI2RPGBww90u_5fi8sWdahpdipOoPzkp7C0,2960 +cffi/_shimmed_dist_utils.py,sha256=Bjj2wm8yZbvFvWEx5AEfmqaqZyZFhYfoyLLQHkXZuao,2230 +cffi/api.py,sha256=alBv6hZQkjpmZplBphdaRn2lPO9-CORs_M7ixabvZWI,42169 +cffi/backend_ctypes.py,sha256=h5ZIzLc6BFVXnGyc9xPqZWUS7qGy7yFSDqXe68Sa8z4,42454 +cffi/cffi_opcode.py,sha256=JDV5l0R0_OadBX_uE7xPPTYtMdmpp8I9UYd6av7aiDU,5731 +cffi/commontypes.py,sha256=7N6zPtCFlvxXMWhHV08psUjdYIK2XgsN3yo5dgua_v4,2805 +cffi/cparser.py,sha256=0qI3mEzZSNVcCangoyXOoAcL-RhpQL08eG8798T024s,44789 +cffi/error.py,sha256=v6xTiS4U0kvDcy4h_BDRo5v39ZQuj-IMRYLv5ETddZs,877 +cffi/ffiplatform.py,sha256=avxFjdikYGJoEtmJO7ewVmwG_VEVl6EZ_WaNhZYCqv4,3584 +cffi/lock.py,sha256=l9TTdwMIMpi6jDkJGnQgE9cvTIR7CAntIJr8EGHt3pY,747 +cffi/model.py,sha256=W30UFQZE73jL5Mx5N81YT77us2W2iJjTm0XYfnwz1cg,21797 +cffi/parse_c_type.h,sha256=OdwQfwM9ktq6vlCB43exFQmxDBtj2MBNdK8LYl15tjw,5976 +cffi/pkgconfig.py,sha256=LP1w7vmWvmKwyqLaU1Z243FOWGNQMrgMUZrvgFuOlco,4374 +cffi/recompiler.py,sha256=sim4Tm7lamt2Jn8uzKN0wMYp6ODByk3g7of47-h9LD4,65367 +cffi/setuptools_ext.py,sha256=-ebj79lO2_AUH-kRcaja2pKY1Z_5tloGwsJgzK8P3Cc,8871 +cffi/vengine_cpy.py,sha256=8UagT6ZEOZf6Dju7_CfNulue8CnsHLEzJYhnqUhoF04,43752 +cffi/vengine_gen.py,sha256=DUlEIrDiVin1Pnhn1sfoamnS5NLqfJcOdhRoeSNeJRg,26939 +cffi/verifier.py,sha256=oX8jpaohg2Qm3aHcznidAdvrVm5N4sQYG0a3Eo5mIl4,11182 diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/REQUESTED b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/REQUESTED new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/WHEEL b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/WHEEL new file mode 100644 index 00000000..eac371e1 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: setuptools (74.1.1) +Root-Is-Purelib: false +Tag: cp311-cp311-win_amd64 + diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/entry_points.txt b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/entry_points.txt new file mode 100644 index 00000000..4b0274f2 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[distutils.setup_keywords] +cffi_modules = cffi.setuptools_ext:cffi_modules diff --git a/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/top_level.txt b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/top_level.txt new file mode 100644 index 00000000..f6457795 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi-1.17.1.dist-info/top_level.txt @@ -0,0 +1,2 @@ +_cffi_backend +cffi diff --git a/templates/skills/file_manager/dependencies/cffi/__init__.py b/templates/skills/file_manager/dependencies/cffi/__init__.py new file mode 100644 index 00000000..2e35a38c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/__init__.py @@ -0,0 +1,14 @@ +__all__ = ['FFI', 
'VerificationError', 'VerificationMissing', 'CDefError', + 'FFIError'] + +from .api import FFI +from .error import CDefError, FFIError, VerificationError, VerificationMissing +from .error import PkgConfigError + +__version__ = "1.17.1" +__version_info__ = (1, 17, 1) + +# The verifier module file names are based on the CRC32 of a string that +# contains the following version number. It may be older than __version__ +# if nothing is clearly incompatible. +__version_verifier_modules__ = "0.8.6" diff --git a/templates/skills/file_manager/dependencies/cffi/_cffi_errors.h b/templates/skills/file_manager/dependencies/cffi/_cffi_errors.h new file mode 100644 index 00000000..158e0590 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/_cffi_errors.h @@ -0,0 +1,149 @@ +#ifndef CFFI_MESSAGEBOX +# ifdef _MSC_VER +# define CFFI_MESSAGEBOX 1 +# else +# define CFFI_MESSAGEBOX 0 +# endif +#endif + + +#if CFFI_MESSAGEBOX +/* Windows only: logic to take the Python-CFFI embedding logic + initialization errors and display them in a background thread + with MessageBox. The idea is that if the whole program closes + as a result of this problem, then likely it is already a console + program and you can read the stderr output in the console too. + If it is not a console program, then it will likely show its own + dialog to complain, or generally not abruptly close, and for this + case the background thread should stay alive. +*/ +static void *volatile _cffi_bootstrap_text; + +static PyObject *_cffi_start_error_capture(void) +{ + PyObject *result = NULL; + PyObject *x, *m, *bi; + + if (InterlockedCompareExchangePointer(&_cffi_bootstrap_text, + (void *)1, NULL) != NULL) + return (PyObject *)1; + + m = PyImport_AddModule("_cffi_error_capture"); + if (m == NULL) + goto error; + + result = PyModule_GetDict(m); + if (result == NULL) + goto error; + +#if PY_MAJOR_VERSION >= 3 + bi = PyImport_ImportModule("builtins"); +#else + bi = PyImport_ImportModule("__builtin__"); +#endif + if (bi == NULL) + goto error; + PyDict_SetItemString(result, "__builtins__", bi); + Py_DECREF(bi); + + x = PyRun_String( + "import sys\n" + "class FileLike:\n" + " def write(self, x):\n" + " try:\n" + " of.write(x)\n" + " except: pass\n" + " self.buf += x\n" + " def flush(self):\n" + " pass\n" + "fl = FileLike()\n" + "fl.buf = ''\n" + "of = sys.stderr\n" + "sys.stderr = fl\n" + "def done():\n" + " sys.stderr = of\n" + " return fl.buf\n", /* make sure the returned value stays alive */ + Py_file_input, + result, result); + Py_XDECREF(x); + + error: + if (PyErr_Occurred()) + { + PyErr_WriteUnraisable(Py_None); + PyErr_Clear(); + } + return result; +} + +#pragma comment(lib, "user32.lib") + +static DWORD WINAPI _cffi_bootstrap_dialog(LPVOID ignored) +{ + Sleep(666); /* may be interrupted if the whole process is closing */ +#if PY_MAJOR_VERSION >= 3 + MessageBoxW(NULL, (wchar_t *)_cffi_bootstrap_text, + L"Python-CFFI error", + MB_OK | MB_ICONERROR); +#else + MessageBoxA(NULL, (char *)_cffi_bootstrap_text, + "Python-CFFI error", + MB_OK | MB_ICONERROR); +#endif + _cffi_bootstrap_text = NULL; + return 0; +} + +static void _cffi_stop_error_capture(PyObject *ecap) +{ + PyObject *s; + void *text; + + if (ecap == (PyObject *)1) + return; + + if (ecap == NULL) + goto error; + + s = PyRun_String("done()", Py_eval_input, ecap, ecap); + if (s == NULL) + goto error; + + /* Show a dialog box, but in a background thread, and + never show multiple dialog boxes at once. 
*/ +#if PY_MAJOR_VERSION >= 3 + text = PyUnicode_AsWideCharString(s, NULL); +#else + text = PyString_AsString(s); +#endif + + _cffi_bootstrap_text = text; + + if (text != NULL) + { + HANDLE h; + h = CreateThread(NULL, 0, _cffi_bootstrap_dialog, + NULL, 0, NULL); + if (h != NULL) + CloseHandle(h); + } + /* decref the string, but it should stay alive as 'fl.buf' + in the small module above. It will really be freed only if + we later get another similar error. So it's a leak of at + most one copy of the small module. That's fine for this + situation which is usually a "fatal error" anyway. */ + Py_DECREF(s); + PyErr_Clear(); + return; + + error: + _cffi_bootstrap_text = NULL; + PyErr_Clear(); +} + +#else + +static PyObject *_cffi_start_error_capture(void) { return NULL; } +static void _cffi_stop_error_capture(PyObject *ecap) { } + +#endif diff --git a/templates/skills/file_manager/dependencies/cffi/_cffi_include.h b/templates/skills/file_manager/dependencies/cffi/_cffi_include.h new file mode 100644 index 00000000..908a1d73 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/_cffi_include.h @@ -0,0 +1,389 @@ +#define _CFFI_ + +/* We try to define Py_LIMITED_API before including Python.h. + + Mess: we can only define it if Py_DEBUG, Py_TRACE_REFS and + Py_REF_DEBUG are not defined. This is a best-effort approximation: + we can learn about Py_DEBUG from pyconfig.h, but it is unclear if + the same works for the other two macros. Py_DEBUG implies them, + but not the other way around. + + The implementation is messy (issue #350): on Windows, with _MSC_VER, + we have to define Py_LIMITED_API even before including pyconfig.h. + In that case, we guess what pyconfig.h will do to the macros above, + and check our guess after the #include. + + Note that on Windows, with CPython 3.x, you need >= 3.5 and virtualenv + version >= 16.0.0. With older versions of either, you don't get a + copy of PYTHON3.DLL in the virtualenv. We can't check the version of + CPython *before* we even include pyconfig.h. ffi.set_source() puts + a ``#define _CFFI_NO_LIMITED_API'' at the start of this file if it is + running on Windows < 3.5, as an attempt at fixing it, but that's + arguably wrong because it may not be the target version of Python. + Still better than nothing I guess. As another workaround, you can + remove the definition of Py_LIMITED_API here. + + See also 'py_limited_api' in cffi/setuptools_ext.py. +*/ +#if !defined(_CFFI_USE_EMBEDDING) && !defined(Py_LIMITED_API) +# ifdef _MSC_VER +# if !defined(_DEBUG) && !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# include <pyconfig.h> + /* sanity-check: Py_LIMITED_API will cause crashes if any of these + are also defined. Normally, the Python file PC/pyconfig.h does not + cause any of these to be defined, with the exception that _DEBUG + causes Py_DEBUG. Double-check that. 
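+ (Editorial note, not upstream cffi text: if the checks below trip, one
+ way to see what pyconfig.h decided is to preprocess this header alone,
+ e.g. with cl /P or gcc -E, and search the output for Py_LIMITED_API.)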
*/ +# ifdef Py_LIMITED_API +# if defined(Py_DEBUG) +# error "pyconfig.h unexpectedly defines Py_DEBUG, but Py_LIMITED_API is set" +# endif +# if defined(Py_TRACE_REFS) +# error "pyconfig.h unexpectedly defines Py_TRACE_REFS, but Py_LIMITED_API is set" +# endif +# if defined(Py_REF_DEBUG) +# error "pyconfig.h unexpectedly defines Py_REF_DEBUG, but Py_LIMITED_API is set" +# endif +# endif +# else +# include <pyconfig.h> +# if !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) +# define Py_LIMITED_API +# endif +# endif +#endif + +#include <Python.h> +#ifdef __cplusplus +extern "C" { +#endif +#include <stddef.h> +#include "parse_c_type.h" + +/* this block of #ifs should be kept exactly identical between + c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py + and cffi/_cffi_include.h */ +#if defined(_MSC_VER) +# include <malloc.h> /* for alloca() */ +# if _MSC_VER < 1600 /* MSVC < 2010 */ + typedef __int8 int8_t; + typedef __int16 int16_t; + typedef __int32 int32_t; + typedef __int64 int64_t; + typedef unsigned __int8 uint8_t; + typedef unsigned __int16 uint16_t; + typedef unsigned __int32 uint32_t; + typedef unsigned __int64 uint64_t; + typedef __int8 int_least8_t; + typedef __int16 int_least16_t; + typedef __int32 int_least32_t; + typedef __int64 int_least64_t; + typedef unsigned __int8 uint_least8_t; + typedef unsigned __int16 uint_least16_t; + typedef unsigned __int32 uint_least32_t; + typedef unsigned __int64 uint_least64_t; + typedef __int8 int_fast8_t; + typedef __int16 int_fast16_t; + typedef __int32 int_fast32_t; + typedef __int64 int_fast64_t; + typedef unsigned __int8 uint_fast8_t; + typedef unsigned __int16 uint_fast16_t; + typedef unsigned __int32 uint_fast32_t; + typedef unsigned __int64 uint_fast64_t; + typedef __int64 intmax_t; + typedef unsigned __int64 uintmax_t; +# else +# include <stdint.h> +# endif +# if _MSC_VER < 1800 /* MSVC < 2013 */ +# ifndef __cplusplus + typedef unsigned char _Bool; +# endif +# endif +# define _cffi_float_complex_t _Fcomplex /* include <complex.h> for it */ +# define _cffi_double_complex_t _Dcomplex /* include <complex.h> for it */ +#else +# include <stdint.h> +# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) +# include <alloca.h> +# endif +# define _cffi_float_complex_t float _Complex +# define _cffi_double_complex_t double _Complex +#endif + +#ifdef __GNUC__ +# define _CFFI_UNUSED_FN __attribute__((unused)) +#else +# define _CFFI_UNUSED_FN /* nothing */ +#endif + +#ifdef __cplusplus +# ifndef _Bool + typedef bool _Bool; /* semi-hackish: C++ has no _Bool; bool is builtin */ +# endif +#endif + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + + +#if PY_MAJOR_VERSION >= 3 +# define PyInt_FromLong PyLong_FromLong +#endif + +#define _cffi_from_c_double PyFloat_FromDouble +#define _cffi_from_c_float PyFloat_FromDouble +#define _cffi_from_c_long PyInt_FromLong +#define _cffi_from_c_ulong PyLong_FromUnsignedLong +#define _cffi_from_c_longlong PyLong_FromLongLong +#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong +#define _cffi_from_c__Bool PyBool_FromLong + +#define _cffi_to_c_double PyFloat_AsDouble +#define _cffi_to_c_float PyFloat_AsDouble + +#define _cffi_from_c_int(x, type) \ + (((type)-1) > 0 ? /* unsigned */ \ + (sizeof(type) < sizeof(long) ? \ + PyInt_FromLong((long)x) : \ + sizeof(type) == sizeof(long) ? \ + PyLong_FromUnsignedLong((unsigned long)x) : \ + PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ + (sizeof(type) <= sizeof(long) ? 
\ + PyInt_FromLong((long)x) : \ + PyLong_FromLongLong((long long)x))) + +#define _cffi_to_c_int(o, type) \ + ((type)( \ + sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ + : (type)_cffi_to_c_i8(o)) : \ + sizeof(type) == 2 ? (((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, struct _cffi_ctypedescr *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + not used any more +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((_cffi_wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(_cffi_wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(struct _cffi_ctypedescr *, \ + PyObject *, char **))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[24]) +#define _CFFI_CPIDX 25 +#define _cffi_call_python \ + ((void(*)(struct _cffi_externpy_s *, char *))_cffi_exports[_CFFI_CPIDX]) +#define _cffi_to_c_wchar3216_t \ + ((int(*)(PyObject *))_cffi_exports[26]) +#define _cffi_from_c_wchar3216_t \ + ((PyObject *(*)(int))_cffi_exports[27]) +#define _CFFI_NUM_EXPORTS 28 + +struct _cffi_ctypedescr; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; + +#define _cffi_type(index) ( \ + assert((((uintptr_t)_cffi_types[index]) & 1) == 0), \ + (struct _cffi_ctypedescr *)_cffi_types[index]) + +static PyObject *_cffi_init(const char *module_name, Py_ssize_t version, + const struct _cffi_type_context_s *ctx) +{ + PyObject *module, *o_arg, *new_module; + void *raw[] = { + (void *)module_name, + (void *)version, + (void *)_cffi_exports, + (void *)ctx, + }; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + o_arg = PyLong_FromVoidPtr((void *)raw); + if (o_arg == NULL) + goto failure; + + new_module = 
PyObject_CallMethod( + module, (char *)"_init_cffi_1_0_external_module", (char *)"O", o_arg); + + Py_DECREF(o_arg); + Py_DECREF(module); + return new_module; + + failure: + Py_XDECREF(module); + return NULL; +} + + +#ifdef HAVE_WCHAR_H +typedef wchar_t _cffi_wchar_t; +#else +typedef uint16_t _cffi_wchar_t; /* same random pick as _cffi_backend.c */ +#endif + +_CFFI_UNUSED_FN static uint16_t _cffi_to_c_char16_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 2) + return (uint16_t)_cffi_to_c_wchar_t(o); + else + return (uint16_t)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char16_t(uint16_t x) +{ + if (sizeof(_cffi_wchar_t) == 2) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +_CFFI_UNUSED_FN static int _cffi_to_c_char32_t(PyObject *o) +{ + if (sizeof(_cffi_wchar_t) == 4) + return (int)_cffi_to_c_wchar_t(o); + else + return (int)_cffi_to_c_wchar3216_t(o); +} + +_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char32_t(unsigned int x) +{ + if (sizeof(_cffi_wchar_t) == 4) + return _cffi_from_c_wchar_t((_cffi_wchar_t)x); + else + return _cffi_from_c_wchar3216_t((int)x); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +_CFFI_UNUSED_FN static int +_cffi_convert_array_argument(struct _cffi_ctypedescr *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +_CFFI_UNUSED_FN static void +_cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +/********** end CPython-specific section **********/ +#else +_CFFI_UNUSED_FN +static void (*_cffi_call_python_org)(struct _cffi_externpy_s *, char *); +# define _cffi_call_python _cffi_call_python_org +#endif + + +#define _cffi_array_len(array) (sizeof(array) / sizeof((array)[0])) + +#define _cffi_prim_int(size, sign) \ + ((size) == 1 ? ((sign) ? _CFFI_PRIM_INT8 : _CFFI_PRIM_UINT8) : \ + (size) == 2 ? ((sign) ? _CFFI_PRIM_INT16 : _CFFI_PRIM_UINT16) : \ + (size) == 4 ? ((sign) ? _CFFI_PRIM_INT32 : _CFFI_PRIM_UINT32) : \ + (size) == 8 ? ((sign) ? _CFFI_PRIM_INT64 : _CFFI_PRIM_UINT64) : \ + _CFFI__UNKNOWN_PRIM) + +#define _cffi_prim_float(size) \ + ((size) == sizeof(float) ? _CFFI_PRIM_FLOAT : \ + (size) == sizeof(double) ? _CFFI_PRIM_DOUBLE : \ + (size) == sizeof(long double) ? 
_CFFI__UNKNOWN_LONG_DOUBLE : \ + _CFFI__UNKNOWN_FLOAT_PRIM) + +#define _cffi_check_int(got, got_nonpos, expected) \ + ((got_nonpos) == (expected <= 0) && \ + (got) == (unsigned long long)expected) + +#ifdef MS_WIN32 +# define _cffi_stdcall __stdcall +#else +# define _cffi_stdcall /* nothing */ +#endif + +#ifdef __cplusplus +} +#endif diff --git a/templates/skills/file_manager/dependencies/cffi/_embedding.h b/templates/skills/file_manager/dependencies/cffi/_embedding.h new file mode 100644 index 00000000..94d8b30a --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/_embedding.h @@ -0,0 +1,550 @@ + +/***** Support code for embedding *****/ + +#ifdef __cplusplus +extern "C" { +#endif + + +#if defined(_WIN32) +# define CFFI_DLLEXPORT __declspec(dllexport) +#elif defined(__GNUC__) +# define CFFI_DLLEXPORT __attribute__((visibility("default"))) +#else +# define CFFI_DLLEXPORT /* nothing */ +#endif + + +/* There are two global variables of type _cffi_call_python_fnptr: + + * _cffi_call_python, which we declare just below, is the one called + by ``extern "Python"`` implementations. + + * _cffi_call_python_org, which on CPython is actually part of the + _cffi_exports[] array, is the function pointer copied from + _cffi_backend. If _cffi_start_python() fails, then this is set + to NULL; otherwise, it should never be NULL. + + After initialization is complete, both are equal. However, the + first one remains equal to &_cffi_start_and_call_python until the + very end of initialization, when we are (or should be) sure that + concurrent threads also see a completely initialized world, and + only then is it changed. +*/ +#undef _cffi_call_python +typedef void (*_cffi_call_python_fnptr)(struct _cffi_externpy_s *, char *); +static void _cffi_start_and_call_python(struct _cffi_externpy_s *, char *); +static _cffi_call_python_fnptr _cffi_call_python = &_cffi_start_and_call_python; + + +#ifndef _MSC_VER + /* --- Assuming a GCC not infinitely old --- */ +# define cffi_compare_and_swap(l,o,n) __sync_bool_compare_and_swap(l,o,n) +# define cffi_write_barrier() __sync_synchronize() +# if !defined(__amd64__) && !defined(__x86_64__) && \ + !defined(__i386__) && !defined(__i386) +# define cffi_read_barrier() __sync_synchronize() +# else +# define cffi_read_barrier() (void)0 +# endif +#else + /* --- Windows threads version --- */ +# include <windows.h> +# define cffi_compare_and_swap(l,o,n) \ + (InterlockedCompareExchangePointer(l,n,o) == (o)) +# define cffi_write_barrier() InterlockedCompareExchange(&_cffi_dummy,0,0) +# define cffi_read_barrier() (void)0 +static volatile LONG _cffi_dummy; +#endif + +#ifdef WITH_THREAD +# ifndef _MSC_VER +# include <pthread.h> + static pthread_mutex_t _cffi_embed_startup_lock; +# else + static CRITICAL_SECTION _cffi_embed_startup_lock; +# endif + static char _cffi_embed_startup_lock_ready = 0; +#endif + +static void _cffi_acquire_reentrant_mutex(void) +{ + static void *volatile lock = NULL; + + while (!cffi_compare_and_swap(&lock, NULL, (void *)1)) { + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: pthread_mutex_init() should be very fast, and + this is only run at start-up anyway. 
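+ (Editorial note, not upstream cffi text: the CAS loop above is only a
+ tiny bootstrap lock guarding the one-time creation of the real
+ recursive mutex below; once _cffi_embed_startup_lock_ready is set,
+ later callers fall through it and block on the pthread mutex or
+ CRITICAL_SECTION instead.)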
*/ + } + +#ifdef WITH_THREAD + if (!_cffi_embed_startup_lock_ready) { +# ifndef _MSC_VER + pthread_mutexattr_t attr; + pthread_mutexattr_init(&attr); + pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); + pthread_mutex_init(&_cffi_embed_startup_lock, &attr); +# else + InitializeCriticalSection(&_cffi_embed_startup_lock); +# endif + _cffi_embed_startup_lock_ready = 1; + } +#endif + + while (!cffi_compare_and_swap(&lock, (void *)1, NULL)) + ; + +#ifndef _MSC_VER + pthread_mutex_lock(&_cffi_embed_startup_lock); +#else + EnterCriticalSection(&_cffi_embed_startup_lock); +#endif +} + +static void _cffi_release_reentrant_mutex(void) +{ +#ifndef _MSC_VER + pthread_mutex_unlock(&_cffi_embed_startup_lock); +#else + LeaveCriticalSection(&_cffi_embed_startup_lock); +#endif +} + + +/********** CPython-specific section **********/ +#ifndef PYPY_VERSION + +#include "_cffi_errors.h" + + +#define _cffi_call_python_org _cffi_exports[_CFFI_CPIDX] + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(void); /* forward */ + +static void _cffi_py_initialize(void) +{ + /* XXX use initsigs=0, which "skips initialization registration of + signal handlers, which might be useful when Python is + embedded" according to the Python docs. But review and think + if it should be a user-controllable setting. + + XXX we should also give a way to write errors to a buffer + instead of to stderr. + + XXX if importing 'site' fails, CPython (any version) calls + exit(). Should we try to work around this behavior here? + */ + Py_InitializeEx(0); +} + +static int _cffi_initialize_python(void) +{ + /* This initializes Python, imports _cffi_backend, and then the + present .dll/.so is set up as a CPython C extension module. + */ + int result; + PyGILState_STATE state; + PyObject *pycode=NULL, *global_dict=NULL, *x; + PyObject *builtins; + + state = PyGILState_Ensure(); + + /* Call the initxxx() function from the present module. It will + create and initialize us as a CPython extension module, instead + of letting the startup Python code do it---it might reimport + the same .dll/.so and get maybe confused on some platforms. + It might also have troubles locating the .dll/.so again for all + I know. + */ + (void)_CFFI_PYTHON_STARTUP_FUNC(); + if (PyErr_Occurred()) + goto error; + + /* Now run the Python code provided to ffi.embedding_init_code(). + */ + pycode = Py_CompileString(_CFFI_PYTHON_STARTUP_CODE, + "<init code for '" _CFFI_MODULE_NAME "'>", + Py_file_input); + if (pycode == NULL) + goto error; + global_dict = PyDict_New(); + if (global_dict == NULL) + goto error; + builtins = PyEval_GetBuiltins(); + if (builtins == NULL) + goto error; + if (PyDict_SetItemString(global_dict, "__builtins__", builtins) < 0) + goto error; + x = PyEval_EvalCode( +#if PY_MAJOR_VERSION < 3 + (PyCodeObject *) +#endif + pycode, global_dict, global_dict); + if (x == NULL) + goto error; + Py_DECREF(x); + + /* Done! Now if we've been called from + _cffi_start_and_call_python() in an ``extern "Python"``, we can + only hope that the Python code did correctly set up the + corresponding @ffi.def_extern() function. Otherwise, the + general logic of ``extern "Python"`` functions (inside the + _cffi_backend module) will find that the reference is still + missing and print an error. + */ + result = 0; + done: + Py_XDECREF(pycode); + Py_XDECREF(global_dict); + PyGILState_Release(state); + return result; + + error:; + { + /* Print as much information as potentially useful. 
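+ (Editorial note, not upstream cffi text: the block below also starts
+ _cffi_start_error_capture(), so on Windows the same report can end
+ up in the MessageBox fallback defined in _cffi_errors.h.)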
+ Debugging load-time failures with embedding is not fun + */ + PyObject *ecap; + PyObject *exception, *v, *tb, *f, *modules, *mod; + PyErr_Fetch(&exception, &v, &tb); + ecap = _cffi_start_error_capture(); + f = PySys_GetObject((char *)"stderr"); + if (f != NULL && f != Py_None) { + PyFile_WriteString( + "Failed to initialize the Python-CFFI embedding logic:\n\n", f); + } + + if (exception != NULL) { + PyErr_NormalizeException(&exception, &v, &tb); + PyErr_Display(exception, v, tb); + } + Py_XDECREF(exception); + Py_XDECREF(v); + Py_XDECREF(tb); + + if (f != NULL && f != Py_None) { + PyFile_WriteString("\nFrom: " _CFFI_MODULE_NAME + "\ncompiled with cffi version: 1.17.1" + "\n_cffi_backend module: ", f); + modules = PyImport_GetModuleDict(); + mod = PyDict_GetItemString(modules, "_cffi_backend"); + if (mod == NULL) { + PyFile_WriteString("not loaded", f); + } + else { + v = PyObject_GetAttrString(mod, "__file__"); + PyFile_WriteObject(v, f, 0); + Py_XDECREF(v); + } + PyFile_WriteString("\nsys.path: ", f); + PyFile_WriteObject(PySys_GetObject((char *)"path"), f, 0); + PyFile_WriteString("\n\n", f); + } + _cffi_stop_error_capture(ecap); + } + result = -1; + goto done; +} + +#if PY_VERSION_HEX < 0x03080000 +PyAPI_DATA(char *) _PyParser_TokenNames[]; /* from CPython */ +#endif + +static int _cffi_carefully_make_gil(void) +{ + /* This does the basic initialization of Python. It can be called + completely concurrently from unrelated threads. It assumes + that we don't hold the GIL before (if it exists), and we don't + hold it afterwards. + + (What it really does used to be completely different in Python 2 + and Python 3, with the Python 2 solution avoiding the spin-lock + around the Py_InitializeEx() call. However, after recent changes + to CPython 2.7 (issue #358) it no longer works. So we use the + Python 3 solution everywhere.) + + This initializes Python by calling Py_InitializeEx(). + Important: this must not be called concurrently at all. + So we use a global variable as a simple spin lock. This global + variable must be from 'libpythonX.Y.so', not from this + cffi-based extension module, because it must be shared from + different cffi-based extension modules. + + In Python < 3.8, we choose + _PyParser_TokenNames[0] as a completely arbitrary pointer value + that is never written to. The default is to point to the + string "ENDMARKER". We change it temporarily to point to the + next character in that string. (Yes, I know it's REALLY + obscure.) + + In Python >= 3.8, this string array is no longer writable, so + instead we pick PyCapsuleType.tp_version_tag. We can't change + Python < 3.8 because someone might use a mixture of cffi + embedded modules, some of which were compiled before this file + changed. + + In Python >= 3.12, this stopped working because that particular + tp_version_tag gets modified during interpreter startup. It's + arguably a bad idea before 3.12 too, but again we can't change + that because someone might use a mixture of cffi embedded + modules, and no-one reported a bug so far. In Python >= 3.12 + we go instead for PyCapsuleType.tp_as_buffer, which is supposed + to always be NULL. We write to it temporarily a pointer to + a struct full of NULLs, which is semantically the same. 
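+ (Editorial note, not upstream cffi text: in short, the chosen lock
+ word must live in libpythonX.Y so every cffi extension shares it,
+ must have a known idle value, and must tolerate being temporarily
+ overwritten.)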
+ */ + +#ifdef WITH_THREAD +# if PY_VERSION_HEX < 0x03080000 + char *volatile *lock = (char *volatile *)_PyParser_TokenNames; + char *old_value, *locked_value; + + while (1) { /* spin loop */ + old_value = *lock; + locked_value = old_value + 1; + if (old_value[0] == 'E') { + assert(old_value[1] == 'N'); + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { + assert(old_value[0] == 'N'); + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. */ + } + } +# else +# if PY_VERSION_HEX < 0x030C0000 + int volatile *lock = (int volatile *)&PyCapsule_Type.tp_version_tag; + int old_value, locked_value = -42; + assert(!(PyCapsule_Type.tp_flags & Py_TPFLAGS_HAVE_VERSION_TAG)); +# else + static struct ebp_s { PyBufferProcs buf; int mark; } empty_buffer_procs; + empty_buffer_procs.mark = -42; + PyBufferProcs *volatile *lock = (PyBufferProcs *volatile *) + &PyCapsule_Type.tp_as_buffer; + PyBufferProcs *old_value, *locked_value = &empty_buffer_procs.buf; +# endif + + while (1) { /* spin loop */ + old_value = *lock; + if (old_value == 0) { + if (cffi_compare_and_swap(lock, old_value, locked_value)) + break; + } + else { +# if PY_VERSION_HEX < 0x030C0000 + assert(old_value == locked_value); +# else + /* The pointer should point to a possibly different + empty_buffer_procs from another C extension module */ + assert(((struct ebp_s *)old_value)->mark == -42); +# endif + /* should ideally do a spin loop instruction here, but + hard to do it portably and doesn't really matter I + think: PyEval_InitThreads() should be very fast, and + this is only run at start-up anyway. */ + } + } +# endif +#endif + + /* call Py_InitializeEx() */ + if (!Py_IsInitialized()) { + _cffi_py_initialize(); +#if PY_VERSION_HEX < 0x03070000 + PyEval_InitThreads(); +#endif + PyEval_SaveThread(); /* release the GIL */ + /* the returned tstate must be the one that has been stored into the + autoTLSkey by _PyGILState_Init() called from Py_Initialize(). */ + } + else { +#if PY_VERSION_HEX < 0x03070000 + /* PyEval_InitThreads() is always a no-op from CPython 3.7 */ + PyGILState_STATE state = PyGILState_Ensure(); + PyEval_InitThreads(); + PyGILState_Release(state); +#endif + } + +#ifdef WITH_THREAD + /* release the lock */ + while (!cffi_compare_and_swap(lock, locked_value, old_value)) + ; +#endif + + return 0; +} + +/********** end CPython-specific section **********/ + + +#else + + +/********** PyPy-specific section **********/ + +PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(const void *[]); /* forward */ + +static struct _cffi_pypy_init_s { + const char *name; + void *func; /* function pointer */ + const char *code; +} _cffi_pypy_init = { + _CFFI_MODULE_NAME, + _CFFI_PYTHON_STARTUP_FUNC, + _CFFI_PYTHON_STARTUP_CODE, +}; + +extern int pypy_carefully_make_gil(const char *); +extern int pypy_init_embedded_cffi_module(int, struct _cffi_pypy_init_s *); + +static int _cffi_carefully_make_gil(void) +{ + return pypy_carefully_make_gil(_CFFI_MODULE_NAME); +} + +static int _cffi_initialize_python(void) +{ + return pypy_init_embedded_cffi_module(0xB011, &_cffi_pypy_init); +} + +/********** end PyPy-specific section **********/ + + +#endif + + +#ifdef __GNUC__ +__attribute__((noinline)) +#endif +static _cffi_call_python_fnptr _cffi_start_python(void) +{ + /* Delicate logic to initialize Python. This function can be + called multiple times concurrently, e.g. 
when the process calls + its first ``extern "Python"`` functions in multiple threads at + once. It can also be called recursively, in which case we must + ignore it. We also have to consider what occurs if several + different cffi-based extensions reach this code in parallel + threads---it is a different copy of the code, then, and we + can't have any shared global variable unless it comes from + 'libpythonX.Y.so'. + + Idea: + + * _cffi_carefully_make_gil(): "carefully" call + PyEval_InitThreads() (possibly with Py_InitializeEx() first). + + * then we use a (local) custom lock to make sure that a call to this + cffi-based extension will wait if another call to the *same* + extension is running the initialization in another thread. + It is reentrant, so that a recursive call will not block, but + only one from a different thread. + + * then we grab the GIL and (Python 2) we call Py_InitializeEx(). + At this point, concurrent calls to Py_InitializeEx() are not + possible: we have the GIL. + + * do the rest of the specific initialization, which may + temporarily release the GIL but not the custom lock. + Only release the custom lock when we are done. + */ + static char called = 0; + + if (_cffi_carefully_make_gil() != 0) + return NULL; + + _cffi_acquire_reentrant_mutex(); + + /* Here the GIL exists, but we don't have it. We're only protected + from concurrency by the reentrant mutex. */ + + /* This file only initializes the embedded module once, the first + time this is called, even if there are subinterpreters. */ + if (!called) { + called = 1; /* invoke _cffi_initialize_python() only once, + but don't set '_cffi_call_python' right now, + otherwise concurrent threads won't call + this function at all (we need them to wait) */ + if (_cffi_initialize_python() == 0) { + /* now initialization is finished. Switch to the fast-path. */ + + /* We would like nobody to see the new value of + '_cffi_call_python' without also seeing the rest of the + data initialized. However, this is not possible. But + the new value of '_cffi_call_python' is the function + 'cffi_call_python()' from _cffi_backend. So: */ + cffi_write_barrier(); + /* ^^^ we put a write barrier here, and a corresponding + read barrier at the start of cffi_call_python(). This + ensures that after that read barrier, we see everything + done here before the write barrier. + */ + + assert(_cffi_call_python_org != NULL); + _cffi_call_python = (_cffi_call_python_fnptr)_cffi_call_python_org; + } + else { + /* initialization failed. Reset this to NULL, even if it was + already set to some other value. Future calls to + _cffi_start_python() are still forced to occur, and will + always return NULL from now on. */ + _cffi_call_python_org = NULL; + } + } + + _cffi_release_reentrant_mutex(); + + return (_cffi_call_python_fnptr)_cffi_call_python_org; +} + +static +void _cffi_start_and_call_python(struct _cffi_externpy_s *externpy, char *args) +{ + _cffi_call_python_fnptr fnptr; + int current_err = errno; +#ifdef _MSC_VER + int current_lasterr = GetLastError(); +#endif + fnptr = _cffi_start_python(); + if (fnptr == NULL) { + fprintf(stderr, "function %s() called, but initialization code " + "failed. Returning 0.\n", externpy->name); + memset(args, 0, externpy->size_of_result); + } +#ifdef _MSC_VER + SetLastError(current_lasterr); +#endif + errno = current_err; + + if (fnptr != NULL) + fnptr(externpy, args); +} + + +/* The cffi_start_python() function makes sure Python is initialized + and our cffi module is set up. 
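(Editorial note, not upstream cffi text: for example, an embedder could run "if (cffi_start_python() != 0) abort();" early in main() before spawning threads; the function is declared near the end of this file.) 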
It can be called manually from the + user C code. The same effect is obtained automatically from any + dll-exported ``extern "Python"`` function. This function returns + -1 if initialization failed, 0 if all is OK. */ +_CFFI_UNUSED_FN +static int cffi_start_python(void) +{ + if (_cffi_call_python == &_cffi_start_and_call_python) { + if (_cffi_start_python() == NULL) + return -1; + } + cffi_read_barrier(); + return 0; +} + +#undef cffi_compare_and_swap +#undef cffi_write_barrier +#undef cffi_read_barrier + +#ifdef __cplusplus +} +#endif diff --git a/templates/skills/file_manager/dependencies/cffi/_imp_emulation.py b/templates/skills/file_manager/dependencies/cffi/_imp_emulation.py new file mode 100644 index 00000000..136abddd --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/_imp_emulation.py @@ -0,0 +1,83 @@ + +try: + # this works on Python < 3.12 + from imp import * + +except ImportError: + # this is a limited emulation for Python >= 3.12. + # Note that this is used only for tests or for the old ffi.verify(). + # This is copied from the source code of Python 3.11. + + from _imp import (acquire_lock, release_lock, + is_builtin, is_frozen) + + from importlib._bootstrap import _load + + from importlib import machinery + import os + import sys + import tokenize + + SEARCH_ERROR = 0 + PY_SOURCE = 1 + PY_COMPILED = 2 + C_EXTENSION = 3 + PY_RESOURCE = 4 + PKG_DIRECTORY = 5 + C_BUILTIN = 6 + PY_FROZEN = 7 + PY_CODERESOURCE = 8 + IMP_HOOK = 9 + + def get_suffixes(): + extensions = [(s, 'rb', C_EXTENSION) + for s in machinery.EXTENSION_SUFFIXES] + source = [(s, 'r', PY_SOURCE) for s in machinery.SOURCE_SUFFIXES] + bytecode = [(s, 'rb', PY_COMPILED) for s in machinery.BYTECODE_SUFFIXES] + return extensions + source + bytecode + + def find_module(name, path=None): + if not isinstance(name, str): + raise TypeError("'name' must be a str, not {}".format(type(name))) + elif not isinstance(path, (type(None), list)): + # Backwards-compatibility + raise RuntimeError("'path' must be None or a list, " + "not {}".format(type(path))) + + if path is None: + if is_builtin(name): + return None, None, ('', '', C_BUILTIN) + elif is_frozen(name): + return None, None, ('', '', PY_FROZEN) + else: + path = sys.path + + for entry in path: + package_directory = os.path.join(entry, name) + for suffix in ['.py', machinery.BYTECODE_SUFFIXES[0]]: + package_file_name = '__init__' + suffix + file_path = os.path.join(package_directory, package_file_name) + if os.path.isfile(file_path): + return None, package_directory, ('', '', PKG_DIRECTORY) + for suffix, mode, type_ in get_suffixes(): + file_name = name + suffix + file_path = os.path.join(entry, file_name) + if os.path.isfile(file_path): + break + else: + continue + break # Break out of outer loop when breaking out of inner loop. 
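+ # (Editorial note, not in upstream cffi: the 'else' below belongs to
+ # the outer 'for entry in path' loop; it runs only when no directory
+ # yielded a package or a matching suffix, hence the ImportError.)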
+ else: + raise ImportError(name, name=name) + + encoding = None + if 'b' not in mode: + with open(file_path, 'rb') as file: + encoding = tokenize.detect_encoding(file.readline)[0] + file = open(file_path, mode, encoding=encoding) + return file, file_path, (suffix, mode, type_) + + def load_dynamic(name, path, file=None): + loader = machinery.ExtensionFileLoader(name, path) + spec = machinery.ModuleSpec(name=name, loader=loader, origin=path) + return _load(spec) diff --git a/templates/skills/file_manager/dependencies/cffi/_shimmed_dist_utils.py b/templates/skills/file_manager/dependencies/cffi/_shimmed_dist_utils.py new file mode 100644 index 00000000..c3d23128 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/_shimmed_dist_utils.py @@ -0,0 +1,45 @@ +""" +Temporary shim module to indirect the bits of distutils we need from setuptools/distutils while providing useful +error messages beyond `No module named 'distutils' on Python >= 3.12, or when setuptools' vendored distutils is broken. + +This is a compromise to avoid a hard-dep on setuptools for Python >= 3.12, since many users don't need runtime compilation support from CFFI. +""" +import sys + +try: + # import setuptools first; this is the most robust way to ensure its embedded distutils is available + # (the .pth shim should usually work, but this is even more robust) + import setuptools +except Exception as ex: + if sys.version_info >= (3, 12): + # Python 3.12 has no built-in distutils to fall back on, so any import problem is fatal + raise Exception("This CFFI feature requires setuptools on Python >= 3.12. The setuptools module is missing or non-functional.") from ex + + # silently ignore on older Pythons (support fallback to stdlib distutils where available) +else: + del setuptools + +try: + # bring in just the bits of distutils we need, whether they really came from setuptools or stdlib-embedded distutils + from distutils import log, sysconfig + from distutils.ccompiler import CCompiler + from distutils.command.build_ext import build_ext + from distutils.core import Distribution, Extension + from distutils.dir_util import mkpath + from distutils.errors import DistutilsSetupError, CompileError, LinkError + from distutils.log import set_threshold, set_verbosity + + if sys.platform == 'win32': + try: + # FUTURE: msvc9compiler module was removed in setuptools 74; consider removing, as it's only used by an ancient patch in `recompiler` + from distutils.msvc9compiler import MSVCCompiler + except ImportError: + MSVCCompiler = None +except Exception as ex: + if sys.version_info >= (3, 12): + raise Exception("This CFFI feature requires setuptools on Python >= 3.12. Please install the setuptools package.") from ex + + # anything older, just let the underlying distutils import error fly + raise Exception("This CFFI feature requires distutils. Please install the distutils or setuptools package.") from ex + +del sys diff --git a/templates/skills/file_manager/dependencies/cffi/api.py b/templates/skills/file_manager/dependencies/cffi/api.py new file mode 100644 index 00000000..5a474f3d --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/api.py @@ -0,0 +1,967 @@ +import sys, types +from .lock import allocate_lock +from .error import CDefError +from . 
import model + +try: + callable +except NameError: + # Python 3.1 + from collections import Callable + callable = lambda x: isinstance(x, Callable) + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +_unspecified = object() + + + +class FFI(object): + r''' + The main top-level class that you instantiate once, or once per module. + + Example usage: + + ffi = FFI() + ffi.cdef(""" + int printf(const char *, ...); + """) + + C = ffi.dlopen(None) # standard library + -or- + C = ffi.verify() # use a C compiler: verify the decl above is right + + C.printf("hello, %s!\n", ffi.new("char[]", "world")) + ''' + + def __init__(self, backend=None): + """Create an FFI instance. The 'backend' argument is used to + select a non-default backend, mostly for tests. + """ + if backend is None: + # You need PyPy (>= 2.0 beta), or a CPython (>= 2.6) with + # _cffi_backend.so compiled. + import _cffi_backend as backend + from . import __version__ + if backend.__version__ != __version__: + # bad version! Try to be as explicit as possible. + if hasattr(backend, '__file__'): + # CPython + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. When we import the top-level '_cffi_backend' extension module, we get version %s, located in %r. The two versions should be equal; check your installation." % ( + __version__, __file__, + backend.__version__, backend.__file__)) + else: + # PyPy + raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r. This interpreter comes with a built-in '_cffi_backend' module, which is version %s. The two versions should be equal; check your installation." % ( + __version__, __file__, backend.__version__)) + # (If you insist you can also try to pass the option + # 'backend=backend_ctypes.CTypesBackend()', but don't + # rely on it! It's probably not going to work well.) + + from . import cparser + self._backend = backend + self._lock = allocate_lock() + self._parser = cparser.Parser() + self._cached_btypes = {} + self._parsed_types = types.ModuleType('parsed_types').__dict__ + self._new_types = types.ModuleType('new_types').__dict__ + self._function_caches = [] + self._libraries = [] + self._cdefsources = [] + self._included_ffis = [] + self._windows_unicode = None + self._init_once_cache = {} + self._cdef_version = None + self._embedding = None + self._typecache = model.get_typecache(backend) + if hasattr(backend, 'set_ffi'): + backend.set_ffi(self) + for name in list(backend.__dict__): + if name.startswith('RTLD_'): + setattr(self, name, getattr(backend, name)) + # + with self._lock: + self.BVoidP = self._get_cached_btype(model.voidp_type) + self.BCharA = self._get_cached_btype(model.char_array_type) + if isinstance(backend, types.ModuleType): + # _cffi_backend: attach these constants to the class + if not hasattr(FFI, 'NULL'): + FFI.NULL = self.cast(self.BVoidP, 0) + FFI.CData, FFI.CType = backend._get_types() + else: + # ctypes backend: attach these constants to the instance + self.NULL = self.cast(self.BVoidP, 0) + self.CData, self.CType = backend._get_types() + self.buffer = backend.buffer + + def cdef(self, csource, override=False, packed=False, pack=None): + """Parse the given C source. This registers all declared functions, + types, and global variables. The functions and global variables can + then be accessed via either 'ffi.dlopen()' or 'ffi.verify()'. + The types can be used in 'ffi.new()' and other functions. 
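+ A minimal example (an editorial illustration, not part of the
+ upstream docstring):
+
+ ffi.cdef("typedef struct { int x, y; } point_t;")
+ p = ffi.new("point_t *", [1, 2]) # then p.x == 1 and p.y == 2
+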
+ If 'packed' is specified as True, all structs declared inside this + cdef are packed, i.e. laid out without any field alignment at all. + Alternatively, 'pack' can be a small integer, and requests for + alignment greater than that are ignored (pack=1 is equivalent to + packed=True). + """ + self._cdef(csource, override=override, packed=packed, pack=pack) + + def embedding_api(self, csource, packed=False, pack=None): + self._cdef(csource, packed=packed, pack=pack, dllexport=True) + if self._embedding is None: + self._embedding = '' + + def _cdef(self, csource, override=False, **options): + if not isinstance(csource, str): # unicode, on Python 2 + if not isinstance(csource, basestring): + raise TypeError("cdef() argument must be a string") + csource = csource.encode('ascii') + with self._lock: + self._cdef_version = object() + self._parser.parse(csource, override=override, **options) + self._cdefsources.append(csource) + if override: + for cache in self._function_caches: + cache.clear() + finishlist = self._parser._recomplete + if finishlist: + self._parser._recomplete = [] + for tp in finishlist: + tp.finish_backend_type(self, finishlist) + + def dlopen(self, name, flags=0): + """Load and return a dynamic library identified by 'name'. + The standard C library can be loaded by passing None. + Note that functions and types declared by 'ffi.cdef()' are not + linked to a particular library, just like C headers; in the + library we only look for the actual (untyped) symbols. + """ + if not (isinstance(name, basestring) or + name is None or + isinstance(name, self.CData)): + raise TypeError("dlopen(name): name must be a file name, None, " + "or an already-opened 'void *' handle") + with self._lock: + lib, function_cache = _make_ffi_library(self, name, flags) + self._function_caches.append(function_cache) + self._libraries.append(lib) + return lib + + def dlclose(self, lib): + """Close a library obtained with ffi.dlopen(). After this call, + access to functions or variables from the library will fail + (possibly with a segmentation fault). + """ + type(lib).__cffi_close__(lib) + + def _typeof_locked(self, cdecl): + # call me with the lock! + key = cdecl + if key in self._parsed_types: + return self._parsed_types[key] + # + if not isinstance(cdecl, str): # unicode, on Python 2 + cdecl = cdecl.encode('ascii') + # + type = self._parser.parse_type(cdecl) + really_a_function_type = type.is_raw_function + if really_a_function_type: + type = type.as_function_pointer() + btype = self._get_cached_btype(type) + result = btype, really_a_function_type + self._parsed_types[key] = result + return result + + def _typeof(self, cdecl, consider_function_as_funcptr=False): + # string -> ctype object + try: + result = self._parsed_types[cdecl] + except KeyError: + with self._lock: + result = self._typeof_locked(cdecl) + # + btype, really_a_function_type = result + if really_a_function_type and not consider_function_as_funcptr: + raise CDefError("the type %r is a function type, not a " + "pointer-to-function type" % (cdecl,)) + return btype + + def typeof(self, cdecl): + """Parse the C type given as a string and return the + corresponding <ctype> object. + It can also be used on 'cdata' instance to get its C type. 
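+ Example (an editorial illustration, not part of the upstream
+ docstring): ffi.typeof("int[5]").length == 5, and
+ ffi.typeof("int *").kind == 'pointer'.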
+ """ + if isinstance(cdecl, basestring): + return self._typeof(cdecl) + if isinstance(cdecl, self.CData): + return self._backend.typeof(cdecl) + if isinstance(cdecl, types.BuiltinFunctionType): + res = _builtin_function_type(cdecl) + if res is not None: + return res + if (isinstance(cdecl, types.FunctionType) + and hasattr(cdecl, '_cffi_base_type')): + with self._lock: + return self._get_cached_btype(cdecl._cffi_base_type) + raise TypeError(type(cdecl)) + + def sizeof(self, cdecl): + """Return the size in bytes of the argument. It can be a + string naming a C type, or a 'cdata' instance. + """ + if isinstance(cdecl, basestring): + BType = self._typeof(cdecl) + return self._backend.sizeof(BType) + else: + return self._backend.sizeof(cdecl) + + def alignof(self, cdecl): + """Return the natural alignment size in bytes of the C type + given as a string. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.alignof(cdecl) + + def offsetof(self, cdecl, *fields_or_indexes): + """Return the offset of the named field inside the given + structure or array, which must be given as a C type name. + You can give several field names in case of nested structures. + You can also give numeric values which correspond to array + items, in case of an array type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._typeoffsetof(cdecl, *fields_or_indexes)[1] + + def new(self, cdecl, init=None): + """Allocate an instance according to the specified C type and + return a pointer to it. The specified C type must be either a + pointer or an array: ``new('X *')`` allocates an X and returns + a pointer to it, whereas ``new('X[n]')`` allocates an array of + n X'es and returns an array referencing it (which works + mostly like a pointer, like in C). You can also use + ``new('X[]', n)`` to allocate an array of a non-constant + length n. + + The memory is initialized following the rules of declaring a + global variable in C: by default it is zero-initialized, but + an explicit initializer can be given which can be used to + fill all or part of the memory. + + When the returned object goes out of scope, the memory + is freed. In other words the returned object has + ownership of the value of type 'cdecl' that it points to. This + means that the raw data can be used as long as this object is + kept alive, but must not be used for a longer time. Be careful + about that when copying the pointer to the memory somewhere + else, e.g. into another structure. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.newp(cdecl, init) + + def new_allocator(self, alloc=None, free=None, + should_clear_after_alloc=True): + """Return a new allocator, i.e. a function that behaves like ffi.new() + but uses the provided low-level 'alloc' and 'free' functions. + + 'alloc' is called with the size as argument. If it returns NULL, a + MemoryError is raised. 'free' is called with the result of 'alloc' + as argument. Both can be either Python function or directly C + functions. If 'free' is None, then no free function is called. + If both 'alloc' and 'free' are None, the default is used. + + If 'should_clear_after_alloc' is set to False, then the memory + returned by 'alloc' is assumed to be already cleared (or you are + fine with garbage); otherwise CFFI will clear it. 
+ """ + compiled_ffi = self._backend.FFI() + allocator = compiled_ffi.new_allocator(alloc, free, + should_clear_after_alloc) + def allocate(cdecl, init=None): + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return allocator(cdecl, init) + return allocate + + def cast(self, cdecl, source): + """Similar to a C cast: returns an instance of the named C + type initialized with the given 'source'. The source is + casted between integers or pointers of any type. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.cast(cdecl, source) + + def string(self, cdata, maxlen=-1): + """Return a Python string (or unicode string) from the 'cdata'. + If 'cdata' is a pointer or array of characters or bytes, returns + the null-terminated string. The returned string extends until + the first null character, or at most 'maxlen' characters. If + 'cdata' is an array then 'maxlen' defaults to its length. + + If 'cdata' is a pointer or array of wchar_t, returns a unicode + string following the same rules. + + If 'cdata' is a single character or byte or a wchar_t, returns + it as a string or unicode string. + + If 'cdata' is an enum, returns the value of the enumerator as a + string, or 'NUMBER' if the value is out of range. + """ + return self._backend.string(cdata, maxlen) + + def unpack(self, cdata, length): + """Unpack an array of C data of the given length, + returning a Python string/unicode/list. + + If 'cdata' is a pointer to 'char', returns a byte string. + It does not stop at the first null. This is equivalent to: + ffi.buffer(cdata, length)[:] + + If 'cdata' is a pointer to 'wchar_t', returns a unicode string. + 'length' is measured in wchar_t's; it is not the size in bytes. + + If 'cdata' is a pointer to anything else, returns a list of + 'length' items. This is a faster equivalent to: + [cdata[i] for i in range(length)] + """ + return self._backend.unpack(cdata, length) + + #def buffer(self, cdata, size=-1): + # """Return a read-write buffer object that references the raw C data + # pointed to by the given 'cdata'. The 'cdata' must be a pointer or + # an array. Can be passed to functions expecting a buffer, or directly + # manipulated with: + # + # buf[:] get a copy of it in a regular string, or + # buf[idx] as a single character + # buf[:] = ... + # buf[idx] = ... change the content + # """ + # note that 'buffer' is a type, set on this instance by __init__ + + def from_buffer(self, cdecl, python_buffer=_unspecified, + require_writable=False): + """Return a cdata of the given type pointing to the data of the + given Python object, which must support the buffer interface. + Note that this is not meant to be used on the built-in types + str or unicode (you can build 'char[]' arrays explicitly) + but only on objects containing large quantities of raw data + in some other format, like 'array.array' or numpy arrays. + + The first argument is optional and default to 'char[]'. + """ + if python_buffer is _unspecified: + cdecl, python_buffer = self.BCharA, cdecl + elif isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + return self._backend.from_buffer(cdecl, python_buffer, + require_writable) + + def memmove(self, dest, src, n): + """ffi.memmove(dest, src, n) copies n bytes of memory from src to dest. + + Like the C function memmove(), the memory areas may overlap; + apart from that it behaves like the C function memcpy(). + + 'src' can be any cdata ptr or array, or any Python buffer object. 
+ 'dest' can be any cdata ptr or array, or a writable Python buffer + object. The size to copy, 'n', is always measured in bytes. + + Unlike other methods, this one supports all Python buffer including + byte strings and bytearrays---but it still does not support + non-contiguous buffers. + """ + return self._backend.memmove(dest, src, n) + + def callback(self, cdecl, python_callable=None, error=None, onerror=None): + """Return a callback object or a decorator making such a + callback object. 'cdecl' must name a C function pointer type. + The callback invokes the specified 'python_callable' (which may + be provided either directly or via a decorator). Important: the + callback object must be manually kept alive for as long as the + callback may be invoked from the C level. + """ + def callback_decorator_wrap(python_callable): + if not callable(python_callable): + raise TypeError("the 'python_callable' argument " + "is not callable") + return self._backend.callback(cdecl, python_callable, + error, onerror) + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl, consider_function_as_funcptr=True) + if python_callable is None: + return callback_decorator_wrap # decorator mode + else: + return callback_decorator_wrap(python_callable) # direct mode + + def getctype(self, cdecl, replace_with=''): + """Return a string giving the C type 'cdecl', which may be itself + a string or a <ctype> object. If 'replace_with' is given, it gives + extra text to append (or insert for more complicated C types), like + a variable name, or '*' to get actually the C type 'pointer-to-cdecl'. + """ + if isinstance(cdecl, basestring): + cdecl = self._typeof(cdecl) + replace_with = replace_with.strip() + if (replace_with.startswith('*') + and '&[' in self._backend.getcname(cdecl, '&')): + replace_with = '(%s)' % replace_with + elif replace_with and not replace_with[0] in '[(': + replace_with = ' ' + replace_with + return self._backend.getcname(cdecl, replace_with) + + def gc(self, cdata, destructor, size=0): + """Return a new cdata object that points to the same + data. Later, when this new cdata object is garbage-collected, + 'destructor(old_cdata_object)' will be called. + + The optional 'size' gives an estimate of the size, used to + trigger the garbage collection more eagerly. So far only used + on PyPy. It tells the GC that the returned object keeps alive + roughly 'size' bytes of external memory. + """ + return self._backend.gcp(cdata, destructor, size) + + def _get_cached_btype(self, type): + assert self._lock.acquire(False) is False + # call me with the lock! + try: + BType = self._cached_btypes[type] + except KeyError: + finishlist = [] + BType = type.get_cached_btype(self, finishlist) + for type in finishlist: + type.finish_backend_type(self, finishlist) + return BType + + def verify(self, source='', tmpdir=None, **kwargs): + """Verify that the current ffi signatures compile on this + machine, and return a dynamic library object. The dynamic + library can be used to call functions and access global + variables declared in this 'ffi'. The library is compiled + by the C compiler: it gives you C-level API compatibility + (including calling macros). This is unlike 'ffi.dlopen()', + which requires binary compatibility in the signatures. 
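+
+ Example (an editorial illustration, not part of the upstream
+ docstring; note that verify() is documented as deprecated in
+ favor of set_source()/compile()):
+
+ ffi.cdef("double sqrt(double x);")
+ lib = ffi.verify("#include <math.h>", libraries=["m"]) # "m": Unix
+ assert lib.sqrt(9.0) == 3.0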
+ """ + from .verifier import Verifier, _caller_dir_pycache + # + # If set_unicode(True) was called, insert the UNICODE and + # _UNICODE macro declarations + if self._windows_unicode: + self._apply_windows_unicode(kwargs) + # + # Set the tmpdir here, and not in Verifier.__init__: it picks + # up the caller's directory, which we want to be the caller of + # ffi.verify(), as opposed to the caller of Veritier(). + tmpdir = tmpdir or _caller_dir_pycache() + # + # Make a Verifier() and use it to load the library. + self.verifier = Verifier(self, source, tmpdir, **kwargs) + lib = self.verifier.load_library() + # + # Save the loaded library for keep-alive purposes, even + # if the caller doesn't keep it alive itself (it should). + self._libraries.append(lib) + return lib + + def _get_errno(self): + return self._backend.get_errno() + def _set_errno(self, errno): + self._backend.set_errno(errno) + errno = property(_get_errno, _set_errno, None, + "the value of 'errno' from/to the C calls") + + def getwinerror(self, code=-1): + return self._backend.getwinerror(code) + + def _pointer_to(self, ctype): + with self._lock: + return model.pointer_cache(self, ctype) + + def addressof(self, cdata, *fields_or_indexes): + """Return the address of a . + If 'fields_or_indexes' are given, returns the address of that + field or array item in the structure or array, recursively in + case of nested structures. + """ + try: + ctype = self._backend.typeof(cdata) + except TypeError: + if '__addressof__' in type(cdata).__dict__: + return type(cdata).__addressof__(cdata, *fields_or_indexes) + raise + if fields_or_indexes: + ctype, offset = self._typeoffsetof(ctype, *fields_or_indexes) + else: + if ctype.kind == "pointer": + raise TypeError("addressof(pointer)") + offset = 0 + ctypeptr = self._pointer_to(ctype) + return self._backend.rawaddressof(ctypeptr, cdata, offset) + + def _typeoffsetof(self, ctype, field_or_index, *fields_or_indexes): + ctype, offset = self._backend.typeoffsetof(ctype, field_or_index) + for field1 in fields_or_indexes: + ctype, offset1 = self._backend.typeoffsetof(ctype, field1, 1) + offset += offset1 + return ctype, offset + + def include(self, ffi_to_include): + """Includes the typedefs, structs, unions and enums defined + in another FFI instance. Usage is similar to a #include in C, + where a part of the program might include types defined in + another part for its own usage. Note that the include() + method has no effect on functions, constants and global + variables, which must anyway be accessed directly from the + lib object returned by the original FFI instance. + """ + if not isinstance(ffi_to_include, FFI): + raise TypeError("ffi.include() expects an argument that is also of" + " type cffi.FFI, not %r" % ( + type(ffi_to_include).__name__,)) + if ffi_to_include is self: + raise ValueError("self.include(self)") + with ffi_to_include._lock: + with self._lock: + self._parser.include(ffi_to_include._parser) + self._cdefsources.append('[') + self._cdefsources.extend(ffi_to_include._cdefsources) + self._cdefsources.append(']') + self._included_ffis.append(ffi_to_include) + + def new_handle(self, x): + return self._backend.newp_handle(self.BVoidP, x) + + def from_handle(self, x): + return self._backend.from_handle(x) + + def release(self, x): + self._backend.release(x) + + def set_unicode(self, enabled_flag): + """Windows: if 'enabled_flag' is True, enable the UNICODE and + _UNICODE defines in C, and declare the types like TCHAR and LPTCSTR + to be (pointers to) wchar_t. 
If 'enabled_flag' is False, + declare these types to be (pointers to) plain 8-bit characters. + This is mostly for backward compatibility; you usually want True. + """ + if self._windows_unicode is not None: + raise ValueError("set_unicode() can only be called once") + enabled_flag = bool(enabled_flag) + if enabled_flag: + self.cdef("typedef wchar_t TBYTE;" + "typedef wchar_t TCHAR;" + "typedef const wchar_t *LPCTSTR;" + "typedef const wchar_t *PCTSTR;" + "typedef wchar_t *LPTSTR;" + "typedef wchar_t *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + else: + self.cdef("typedef char TBYTE;" + "typedef char TCHAR;" + "typedef const char *LPCTSTR;" + "typedef const char *PCTSTR;" + "typedef char *LPTSTR;" + "typedef char *PTSTR;" + "typedef TBYTE *PTBYTE;" + "typedef TCHAR *PTCHAR;") + self._windows_unicode = enabled_flag + + def _apply_windows_unicode(self, kwds): + defmacros = kwds.get('define_macros', ()) + if not isinstance(defmacros, (list, tuple)): + raise TypeError("'define_macros' must be a list or tuple") + defmacros = list(defmacros) + [('UNICODE', '1'), + ('_UNICODE', '1')] + kwds['define_macros'] = defmacros + + def _apply_embedding_fix(self, kwds): + # must include an argument like "-lpython2.7" for the compiler + def ensure(key, value): + lst = kwds.setdefault(key, []) + if value not in lst: + lst.append(value) + # + if '__pypy__' in sys.builtin_module_names: + import os + if sys.platform == "win32": + # we need 'libpypy-c.lib'. Current distributions of + # pypy (>= 4.1) contain it as 'libs/python27.lib'. + pythonlib = "python{0[0]}{0[1]}".format(sys.version_info) + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'libs')) + else: + # we need 'libpypy-c.{so,dylib}', which should be by + # default located in 'sys.prefix/bin' for installed + # systems. + if sys.version_info < (3,): + pythonlib = "pypy-c" + else: + pythonlib = "pypy3-c" + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'bin')) + # On uninstalled pypy's, the libpypy-c is typically found in + # .../pypy/goal/. + if hasattr(sys, 'prefix'): + ensure('library_dirs', os.path.join(sys.prefix, 'pypy', 'goal')) + else: + if sys.platform == "win32": + template = "python%d%d" + if hasattr(sys, 'gettotalrefcount'): + template += '_d' + else: + try: + import sysconfig + except ImportError: # 2.6 + from cffi._shimmed_dist_utils import sysconfig + template = "python%d.%d" + if sysconfig.get_config_var('DEBUG_EXT'): + template += sysconfig.get_config_var('DEBUG_EXT') + pythonlib = (template % + (sys.hexversion >> 24, (sys.hexversion >> 16) & 0xff)) + if hasattr(sys, 'abiflags'): + pythonlib += sys.abiflags + ensure('libraries', pythonlib) + if sys.platform == "win32": + ensure('extra_link_args', '/MANIFEST') + + def set_source(self, module_name, source, source_extension='.c', **kwds): + import os + if hasattr(self, '_assigned_source'): + raise ValueError("set_source() cannot be called several times " + "per ffi object") + if not isinstance(module_name, basestring): + raise TypeError("'module_name' must be a string") + if os.sep in module_name or (os.altsep and os.altsep in module_name): + raise ValueError("'module_name' must not contain '/': use a dotted " + "name to make a 'package.module' location") + self._assigned_source = (str(module_name), source, + source_extension, kwds) + + def set_source_pkgconfig(self, module_name, pkgconfig_libs, source, + source_extension='.c', **kwds): + from . 
import pkgconfig
+        if not isinstance(pkgconfig_libs, list):
+            raise TypeError("the pkgconfig_libs argument must be a list "
+                            "of package names")
+        kwds2 = pkgconfig.flags_from_pkgconfig(pkgconfig_libs)
+        pkgconfig.merge_flags(kwds, kwds2)
+        self.set_source(module_name, source, source_extension, **kwds)
+
+    def distutils_extension(self, tmpdir='build', verbose=True):
+        from cffi._shimmed_dist_utils import mkpath
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            if hasattr(self, 'verifier'):     # fallback, 'tmpdir' ignored
+                return self.verifier.get_extension()
+            raise ValueError("set_source() must be called before"
+                             " distutils_extension()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is None:
+            raise TypeError("distutils_extension() is only for C extension "
+                            "modules, not for dlopen()-style pure Python "
+                            "modules")
+        mkpath(tmpdir)
+        ext, updated = recompile(self, module_name,
+                                 source, tmpdir=tmpdir, extradir=tmpdir,
+                                 source_extension=source_extension,
+                                 call_c_compiler=False, **kwds)
+        if verbose:
+            if updated:
+                sys.stderr.write("regenerated: %r\n" % (ext.sources[0],))
+            else:
+                sys.stderr.write("not modified: %r\n" % (ext.sources[0],))
+        return ext
+
+    def emit_c_code(self, filename):
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before emit_c_code()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is None:
+            raise TypeError("emit_c_code() is only for C extension modules, "
+                            "not for dlopen()-style pure Python modules")
+        recompile(self, module_name, source,
+                  c_file=filename, call_c_compiler=False,
+                  uses_ffiplatform=False, **kwds)
+
+    def emit_python_code(self, filename):
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before "
+                             "emit_python_code()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        if source is not None:
+            raise TypeError("emit_python_code() is only for dlopen()-style "
+                            "pure Python modules, not for C extension modules")
+        recompile(self, module_name, source,
+                  c_file=filename, call_c_compiler=False,
+                  uses_ffiplatform=False, **kwds)
+
+    def compile(self, tmpdir='.', verbose=0, target=None, debug=None):
+        """The 'target' argument gives the final file name of the
+        compiled DLL.  Use '*' to force distutils' choice, suitable for
+        regular CPython C API modules.  Use a file name ending in '.*'
+        to ask for the system's default extension for dynamic libraries
+        (.so/.dll/.dylib).
+
+        The default is '*' when building a non-embedded C API extension,
+        and (module_name + '.*') when building an embedded library.
+        """
+        from .recompiler import recompile
+        #
+        if not hasattr(self, '_assigned_source'):
+            raise ValueError("set_source() must be called before compile()")
+        module_name, source, source_extension, kwds = self._assigned_source
+        return recompile(self, module_name, source, tmpdir=tmpdir,
+                         target=target, source_extension=source_extension,
+                         compiler_verbose=verbose, debug=debug, **kwds)
+
+    def init_once(self, func, tag):
+        # Read _init_once_cache[tag], which is either (False, lock) if
+        # we're calling the function now in some thread, or (True, result).
+        # Don't call setdefault() in most cases, to avoid allocating and
+        # immediately freeing a lock; but still use setdefault() to avoid
+        # races.
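+        # Illustrative usage (added commentary, not upstream code): many
+        # threads may call, e.g.,
+        #     lib = ffi.init_once(lambda: ffi.dlopen("libfoo.so"), "libfoo")
+        # concurrently ("libfoo.so" is a made-up name).  Exactly one caller
+        # finds (False, lock), runs the function and stores (True, result);
+        # the others block on that lock and then reuse the cached result.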
+ try: + x = self._init_once_cache[tag] + except KeyError: + x = self._init_once_cache.setdefault(tag, (False, allocate_lock())) + # Common case: we got (True, result), so we return the result. + if x[0]: + return x[1] + # Else, it's a lock. Acquire it to serialize the following tests. + with x[1]: + # Read again from _init_once_cache the current status. + x = self._init_once_cache[tag] + if x[0]: + return x[1] + # Call the function and store the result back. + result = func() + self._init_once_cache[tag] = (True, result) + return result + + def embedding_init_code(self, pysource): + if self._embedding: + raise ValueError("embedding_init_code() can only be called once") + # fix 'pysource' before it gets dumped into the C file: + # - remove empty lines at the beginning, so it starts at "line 1" + # - dedent, if all non-empty lines are indented + # - check for SyntaxErrors + import re + match = re.match(r'\s*\n', pysource) + if match: + pysource = pysource[match.end():] + lines = pysource.splitlines() or [''] + prefix = re.match(r'\s*', lines[0]).group() + for i in range(1, len(lines)): + line = lines[i] + if line.rstrip(): + while not line.startswith(prefix): + prefix = prefix[:-1] + i = len(prefix) + lines = [line[i:]+'\n' for line in lines] + pysource = ''.join(lines) + # + compile(pysource, "cffi_init", "exec") + # + self._embedding = pysource + + def def_extern(self, *args, **kwds): + raise ValueError("ffi.def_extern() is only available on API-mode FFI " + "objects") + + def list_types(self): + """Returns the user type names known to this FFI instance. + This returns a tuple containing three lists of names: + (typedef_names, names_of_structs, names_of_unions) + """ + typedefs = [] + structs = [] + unions = [] + for key in self._parser._declarations: + if key.startswith('typedef '): + typedefs.append(key[8:]) + elif key.startswith('struct '): + structs.append(key[7:]) + elif key.startswith('union '): + unions.append(key[6:]) + typedefs.sort() + structs.sort() + unions.sort() + return (typedefs, structs, unions) + + +def _load_backend_lib(backend, name, flags): + import os + if not isinstance(name, basestring): + if sys.platform != "win32" or name is not None: + return backend.load_library(name, flags) + name = "c" # Windows: load_library(None) fails, but this works + # on Python 2 (backward compatibility hack only) + first_error = None + if '.' in name or '/' in name or os.sep in name: + try: + return backend.load_library(name, flags) + except OSError as e: + first_error = e + import ctypes.util + path = ctypes.util.find_library(name) + if path is None: + if name == "c" and sys.platform == "win32" and sys.version_info >= (3,): + raise OSError("dlopen(None) cannot work on Windows for Python 3 " + "(see http://bugs.python.org/issue23606)") + msg = ("ctypes.util.find_library() did not manage " + "to locate a library called %r" % (name,)) + if first_error is not None: + msg = "%s. 
Additionally, %s" % (first_error, msg) + raise OSError(msg) + return backend.load_library(path, flags) + +def _make_ffi_library(ffi, libname, flags): + backend = ffi._backend + backendlib = _load_backend_lib(backend, libname, flags) + # + def accessor_function(name): + key = 'function ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + value = backendlib.load_function(BType, name) + library.__dict__[name] = value + # + def accessor_variable(name): + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + read_variable = backendlib.read_variable + write_variable = backendlib.write_variable + setattr(FFILibrary, name, property( + lambda self: read_variable(BType, name), + lambda self, value: write_variable(BType, name, value))) + # + def addressof_var(name): + try: + return addr_variables[name] + except KeyError: + with ffi._lock: + if name not in addr_variables: + key = 'variable ' + name + tp, _ = ffi._parser._declarations[key] + BType = ffi._get_cached_btype(tp) + if BType.kind != 'array': + BType = model.pointer_cache(ffi, BType) + p = backendlib.load_function(BType, name) + addr_variables[name] = p + return addr_variables[name] + # + def accessor_constant(name): + raise NotImplementedError("non-integer constant '%s' cannot be " + "accessed from a dlopen() library" % (name,)) + # + def accessor_int_constant(name): + library.__dict__[name] = ffi._parser._int_constants[name] + # + accessors = {} + accessors_version = [False] + addr_variables = {} + # + def update_accessors(): + if accessors_version[0] is ffi._cdef_version: + return + # + for key, (tp, _) in ffi._parser._declarations.items(): + if not isinstance(tp, model.EnumType): + tag, name = key.split(' ', 1) + if tag == 'function': + accessors[name] = accessor_function + elif tag == 'variable': + accessors[name] = accessor_variable + elif tag == 'constant': + accessors[name] = accessor_constant + else: + for i, enumname in enumerate(tp.enumerators): + def accessor_enum(name, tp=tp, i=i): + tp.check_not_partial() + library.__dict__[name] = tp.enumvalues[i] + accessors[enumname] = accessor_enum + for name in ffi._parser._int_constants: + accessors.setdefault(name, accessor_int_constant) + accessors_version[0] = ffi._cdef_version + # + def make_accessor(name): + with ffi._lock: + if name in library.__dict__ or name in FFILibrary.__dict__: + return # added by another thread while waiting for the lock + if name not in accessors: + update_accessors() + if name not in accessors: + raise AttributeError(name) + accessors[name](name) + # + class FFILibrary(object): + def __getattr__(self, name): + make_accessor(name) + return getattr(self, name) + def __setattr__(self, name, value): + try: + property = getattr(self.__class__, name) + except AttributeError: + make_accessor(name) + setattr(self, name, value) + else: + property.__set__(self, value) + def __dir__(self): + with ffi._lock: + update_accessors() + return accessors.keys() + def __addressof__(self, name): + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + make_accessor(name) + if name in library.__dict__: + return library.__dict__[name] + if name in FFILibrary.__dict__: + return addressof_var(name) + raise AttributeError("cffi library has no function or " + "global variable named '%s'" % (name,)) + def __cffi_close__(self): + backendlib.close_lib() + self.__dict__.clear() + # + if isinstance(libname, basestring): + try: + if not 
isinstance(libname, str):    # unicode, on Python 2
+                libname = libname.encode('utf-8')
+            FFILibrary.__name__ = 'FFILibrary_%s' % libname
+        except UnicodeError:
+            pass
+    library = FFILibrary()
+    return library, library.__dict__
+
+def _builtin_function_type(func):
+    # a hack to make at least ffi.typeof(builtin_function) work,
+    # if the builtin function was obtained by 'vengine_cpy'.
+    import sys
+    try:
+        module = sys.modules[func.__module__]
+        ffi = module._cffi_original_ffi
+        types_of_builtin_funcs = module._cffi_types_of_builtin_funcs
+        tp = types_of_builtin_funcs[func]
+    except (KeyError, AttributeError, TypeError):
+        return None
+    else:
+        with ffi._lock:
+            return ffi._get_cached_btype(tp)
diff --git a/templates/skills/file_manager/dependencies/cffi/backend_ctypes.py b/templates/skills/file_manager/dependencies/cffi/backend_ctypes.py
new file mode 100644
index 00000000..e7956a79
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/backend_ctypes.py
@@ -0,0 +1,1121 @@
+import ctypes, ctypes.util, operator, sys
+from . import model
+
+if sys.version_info < (3,):
+    bytechr = chr
+else:
+    unicode = str
+    long = int
+    xrange = range
+    bytechr = lambda num: bytes([num])
+
+class CTypesType(type):
+    pass
+
+class CTypesData(object):
+    __metaclass__ = CTypesType
+    __slots__ = ['__weakref__']
+    __name__ = '<cdata>'
+
+    def __init__(self, *args):
+        raise TypeError("cannot instantiate %r" % (self.__class__,))
+
+    @classmethod
+    def _newp(cls, init):
+        raise TypeError("expected a pointer or array ctype, got '%s'"
+                        % (cls._get_c_name(),))
+
+    @staticmethod
+    def _to_ctypes(value):
+        raise TypeError
+
+    @classmethod
+    def _arg_to_ctypes(cls, *value):
+        try:
+            ctype = cls._ctype
+        except AttributeError:
+            raise TypeError("cannot create an instance of %r" % (cls,))
+        if value:
+            res = cls._to_ctypes(*value)
+            if not isinstance(res, ctype):
+                res = cls._ctype(res)
+        else:
+            res = cls._ctype()
+        return res
+
+    @classmethod
+    def _create_ctype_obj(cls, init):
+        if init is None:
+            return cls._arg_to_ctypes()
+        else:
+            return cls._arg_to_ctypes(init)
+
+    @staticmethod
+    def _from_ctypes(ctypes_value):
+        raise TypeError
+
+    @classmethod
+    def _get_c_name(cls, replace_with=''):
+        return cls._reftypename.replace(' &', replace_with)
+
+    @classmethod
+    def _fix_class(cls):
+        cls.__name__ = 'CData<%s>' % (cls._get_c_name(),)
+        cls.__qualname__ = 'CData<%s>' % (cls._get_c_name(),)
+        cls.__module__ = 'ffi'
+
+    def _get_own_repr(self):
+        raise NotImplementedError
+
+    def _addr_repr(self, address):
+        if address == 0:
+            return 'NULL'
+        else:
+            if address < 0:
+                address += 1 << (8*ctypes.sizeof(ctypes.c_void_p))
+            return '0x%x' % address
+
+    def __repr__(self, c_name=None):
+        own = self._get_own_repr()
+        return '<cdata %r %s>' % (c_name or self._get_c_name(), own)
+
+    def _convert_to_address(self, BClass):
+        if BClass is None:
+            raise TypeError("cannot convert %r to an address" % (
+                self._get_c_name(),))
+        else:
+            raise TypeError("cannot convert %r to %r" % (
+                self._get_c_name(), BClass._get_c_name()))
+
+    @classmethod
+    def _get_size(cls):
+        return ctypes.sizeof(cls._ctype)
+
+    def _get_size_of_instance(self):
+        return ctypes.sizeof(self._ctype)
+
+    @classmethod
+    def _cast_from(cls, source):
+        raise TypeError("cannot cast to %r" % (cls._get_c_name(),))
+
+    def _cast_to_integer(self):
+        return self._convert_to_address(None)
+
+    @classmethod
+    def _alignment(cls):
+        return ctypes.alignment(cls._ctype)
+
+    def __iter__(self):
+        raise TypeError("cdata %r does not support iteration" % (
+            self._get_c_name()),)
+
+    def
_make_cmp(name): + cmpfunc = getattr(operator, name) + def cmp(self, other): + v_is_ptr = not isinstance(self, CTypesGenericPrimitive) + w_is_ptr = (isinstance(other, CTypesData) and + not isinstance(other, CTypesGenericPrimitive)) + if v_is_ptr and w_is_ptr: + return cmpfunc(self._convert_to_address(None), + other._convert_to_address(None)) + elif v_is_ptr or w_is_ptr: + return NotImplemented + else: + if isinstance(self, CTypesGenericPrimitive): + self = self._value + if isinstance(other, CTypesGenericPrimitive): + other = other._value + return cmpfunc(self, other) + cmp.func_name = name + return cmp + + __eq__ = _make_cmp('__eq__') + __ne__ = _make_cmp('__ne__') + __lt__ = _make_cmp('__lt__') + __le__ = _make_cmp('__le__') + __gt__ = _make_cmp('__gt__') + __ge__ = _make_cmp('__ge__') + + def __hash__(self): + return hash(self._convert_to_address(None)) + + def _to_string(self, maxlen): + raise TypeError("string(): %r" % (self,)) + + +class CTypesGenericPrimitive(CTypesData): + __slots__ = [] + + def __hash__(self): + return hash(self._value) + + def _get_own_repr(self): + return repr(self._from_ctypes(self._value)) + + +class CTypesGenericArray(CTypesData): + __slots__ = [] + + @classmethod + def _newp(cls, init): + return cls(init) + + def __iter__(self): + for i in xrange(len(self)): + yield self[i] + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + +class CTypesGenericPtr(CTypesData): + __slots__ = ['_address', '_as_ctype_ptr'] + _automatic_casts = False + kind = "pointer" + + @classmethod + def _newp(cls, init): + return cls(init) + + @classmethod + def _cast_from(cls, source): + if source is None: + address = 0 + elif isinstance(source, CTypesData): + address = source._cast_to_integer() + elif isinstance(source, (int, long)): + address = source + else: + raise TypeError("bad type for cast to %r: %r" % + (cls, type(source).__name__)) + return cls._new_pointer_at(address) + + @classmethod + def _new_pointer_at(cls, address): + self = cls.__new__(cls) + self._address = address + self._as_ctype_ptr = ctypes.cast(address, cls._ctype) + return self + + def _get_own_repr(self): + try: + return self._addr_repr(self._address) + except AttributeError: + return '???' 
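+    # Note (added commentary, not from upstream cffi): for pointer-like
+    # cdata the "integer value" is simply the raw address, so the
+    # comparison and hash machinery defined on CTypesData reduces to
+    # comparing addresses (see _cast_to_integer below).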
+ + def _cast_to_integer(self): + return self._address + + def __nonzero__(self): + return bool(self._address) + __bool__ = __nonzero__ + + @classmethod + def _to_ctypes(cls, value): + if not isinstance(value, CTypesData): + raise TypeError("unexpected %s object" % type(value).__name__) + address = value._convert_to_address(cls) + return ctypes.cast(address, cls._ctype) + + @classmethod + def _from_ctypes(cls, ctypes_ptr): + address = ctypes.cast(ctypes_ptr, ctypes.c_void_p).value or 0 + return cls._new_pointer_at(address) + + @classmethod + def _initialize(cls, ctypes_ptr, value): + if value: + ctypes_ptr.contents = cls._to_ctypes(value).contents + + def _convert_to_address(self, BClass): + if (BClass in (self.__class__, None) or BClass._automatic_casts + or self._automatic_casts): + return self._address + else: + return CTypesData._convert_to_address(self, BClass) + + +class CTypesBaseStructOrUnion(CTypesData): + __slots__ = ['_blob'] + + @classmethod + def _create_ctype_obj(cls, init): + # may be overridden + raise TypeError("cannot instantiate opaque type %s" % (cls,)) + + def _get_own_repr(self): + return self._addr_repr(ctypes.addressof(self._blob)) + + @classmethod + def _offsetof(cls, fieldname): + return getattr(cls._ctype, fieldname).offset + + def _convert_to_address(self, BClass): + if getattr(BClass, '_BItem', None) is self.__class__: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @classmethod + def _from_ctypes(cls, ctypes_struct_or_union): + self = cls.__new__(cls) + self._blob = ctypes_struct_or_union + return self + + @classmethod + def _to_ctypes(cls, value): + return value._blob + + def __repr__(self, c_name=None): + return CTypesData.__repr__(self, c_name or self._get_c_name(' &')) + + +class CTypesBackend(object): + + PRIMITIVE_TYPES = { + 'char': ctypes.c_char, + 'short': ctypes.c_short, + 'int': ctypes.c_int, + 'long': ctypes.c_long, + 'long long': ctypes.c_longlong, + 'signed char': ctypes.c_byte, + 'unsigned char': ctypes.c_ubyte, + 'unsigned short': ctypes.c_ushort, + 'unsigned int': ctypes.c_uint, + 'unsigned long': ctypes.c_ulong, + 'unsigned long long': ctypes.c_ulonglong, + 'float': ctypes.c_float, + 'double': ctypes.c_double, + '_Bool': ctypes.c_bool, + } + + for _name in ['unsigned long long', 'unsigned long', + 'unsigned int', 'unsigned short', 'unsigned char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['uint%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['uintptr_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['size_t'] = PRIMITIVE_TYPES[_name] + + for _name in ['long long', 'long', 'int', 'short', 'signed char']: + _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) + PRIMITIVE_TYPES['int%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_void_p): + PRIMITIVE_TYPES['intptr_t'] = PRIMITIVE_TYPES[_name] + PRIMITIVE_TYPES['ptrdiff_t'] = PRIMITIVE_TYPES[_name] + if _size == ctypes.sizeof(ctypes.c_size_t): + PRIMITIVE_TYPES['ssize_t'] = PRIMITIVE_TYPES[_name] + + + def __init__(self): + self.RTLD_LAZY = 0 # not supported anyway by ctypes + self.RTLD_NOW = 0 + self.RTLD_GLOBAL = ctypes.RTLD_GLOBAL + self.RTLD_LOCAL = ctypes.RTLD_LOCAL + + def set_ffi(self, ffi): + self.ffi = ffi + + def _get_types(self): + return CTypesData, CTypesType + + def load_library(self, path, flags=0): + cdll = ctypes.CDLL(path, flags) + return CTypesLibrary(self, cdll) + + def 
new_void_type(self):
+        class CTypesVoid(CTypesData):
+            __slots__ = []
+            _reftypename = 'void &'
+            @staticmethod
+            def _from_ctypes(novalue):
+                return None
+            @staticmethod
+            def _to_ctypes(novalue):
+                if novalue is not None:
+                    raise TypeError("None expected, got %s object" %
+                                    (type(novalue).__name__,))
+                return None
+        CTypesVoid._fix_class()
+        return CTypesVoid
+
+    def new_primitive_type(self, name):
+        if name == 'wchar_t':
+            raise NotImplementedError(name)
+        ctype = self.PRIMITIVE_TYPES[name]
+        if name == 'char':
+            kind = 'char'
+        elif name in ('float', 'double'):
+            kind = 'float'
+        else:
+            if name in ('signed char', 'unsigned char'):
+                kind = 'byte'
+            elif name == '_Bool':
+                kind = 'bool'
+            else:
+                kind = 'int'
+            is_signed = (ctype(-1).value == -1)
+        #
+        def _cast_source_to_int(source):
+            if isinstance(source, (int, long, float)):
+                source = int(source)
+            elif isinstance(source, CTypesData):
+                source = source._cast_to_integer()
+            elif isinstance(source, bytes):
+                source = ord(source)
+            elif source is None:
+                source = 0
+            else:
+                raise TypeError("bad type for cast to %r: %r" %
+                                (CTypesPrimitive, type(source).__name__))
+            return source
+        #
+        kind1 = kind
+        class CTypesPrimitive(CTypesGenericPrimitive):
+            __slots__ = ['_value']
+            _ctype = ctype
+            _reftypename = '%s &' % name
+            kind = kind1
+
+            def __init__(self, value):
+                self._value = value
+
+            @staticmethod
+            def _create_ctype_obj(init):
+                if init is None:
+                    return ctype()
+                return ctype(CTypesPrimitive._to_ctypes(init))
+
+            if kind == 'int' or kind == 'byte':
+                @classmethod
+                def _cast_from(cls, source):
+                    source = _cast_source_to_int(source)
+                    source = ctype(source).value     # cast within range
+                    return cls(source)
+                def __int__(self):
+                    return self._value
+
+            if kind == 'bool':
+                @classmethod
+                def _cast_from(cls, source):
+                    if not isinstance(source, (int, long, float)):
+                        source = _cast_source_to_int(source)
+                    return cls(bool(source))
+                def __int__(self):
+                    return int(self._value)
+
+            if kind == 'char':
+                @classmethod
+                def _cast_from(cls, source):
+                    source = _cast_source_to_int(source)
+                    source = bytechr(source & 0xFF)
+                    return cls(source)
+                def __int__(self):
+                    return ord(self._value)
+
+            if kind == 'float':
+                @classmethod
+                def _cast_from(cls, source):
+                    if isinstance(source, float):
+                        pass
+                    elif isinstance(source, CTypesGenericPrimitive):
+                        if hasattr(source, '__float__'):
+                            source = float(source)
+                        else:
+                            source = int(source)
+                    else:
+                        source = _cast_source_to_int(source)
+                    source = ctype(source).value     # fix precision
+                    return cls(source)
+                def __int__(self):
+                    return int(self._value)
+                def __float__(self):
+                    return self._value
+
+            _cast_to_integer = __int__
+
+            if kind == 'int' or kind == 'byte' or kind == 'bool':
+                @staticmethod
+                def _to_ctypes(x):
+                    if not isinstance(x, (int, long)):
+                        if isinstance(x, CTypesData):
+                            x = int(x)
+                        else:
+                            raise TypeError("integer expected, got %s" %
+                                            type(x).__name__)
+                    if ctype(x).value != x:
+                        if not is_signed and x < 0:
+                            raise OverflowError("%s: negative integer" % name)
+                        else:
+                            raise OverflowError("%s: integer out of bounds"
+                                                % name)
+                    return x
+
+            if kind == 'char':
+                @staticmethod
+                def _to_ctypes(x):
+                    if isinstance(x, bytes) and len(x) == 1:
+                        return x
+                    if isinstance(x, CTypesPrimitive):    # <CData <char>>
+                        return x._value
+                    raise TypeError("character expected, got %s" %
+                                    type(x).__name__)
+                def __nonzero__(self):
+                    return ord(self._value) != 0
+            else:
+                def __nonzero__(self):
+                    return self._value != 0
+            __bool__ = __nonzero__
+
+            if kind == 'float':
+                @staticmethod
+                def _to_ctypes(x):
+                    if
not isinstance(x, (int, long, float, CTypesData)): + raise TypeError("float expected, got %s" % + type(x).__name__) + return ctype(x).value + + @staticmethod + def _from_ctypes(value): + return getattr(value, 'value', value) + + @staticmethod + def _initialize(blob, init): + blob.value = CTypesPrimitive._to_ctypes(init) + + if kind == 'char': + def _to_string(self, maxlen): + return self._value + if kind == 'byte': + def _to_string(self, maxlen): + return chr(self._value & 0xff) + # + CTypesPrimitive._fix_class() + return CTypesPrimitive + + def new_pointer_type(self, BItem): + getbtype = self.ffi._get_cached_btype + if BItem is getbtype(model.PrimitiveType('char')): + kind = 'charp' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'bytep' + elif BItem is getbtype(model.void_type): + kind = 'voidp' + else: + kind = 'generic' + # + class CTypesPtr(CTypesGenericPtr): + __slots__ = ['_own'] + if kind == 'charp': + __slots__ += ['__as_strbuf'] + _BItem = BItem + if hasattr(BItem, '_ctype'): + _ctype = ctypes.POINTER(BItem._ctype) + _bitem_size = ctypes.sizeof(BItem._ctype) + else: + _ctype = ctypes.c_void_p + if issubclass(BItem, CTypesGenericArray): + _reftypename = BItem._get_c_name('(* &)') + else: + _reftypename = BItem._get_c_name(' * &') + + def __init__(self, init): + ctypeobj = BItem._create_ctype_obj(init) + if kind == 'charp': + self.__as_strbuf = ctypes.create_string_buffer( + ctypeobj.value + b'\x00') + self._as_ctype_ptr = ctypes.cast( + self.__as_strbuf, self._ctype) + else: + self._as_ctype_ptr = ctypes.pointer(ctypeobj) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own = True + + def __add__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address + + other * self._bitem_size) + else: + return NotImplemented + + def __sub__(self, other): + if isinstance(other, (int, long)): + return self._new_pointer_at(self._address - + other * self._bitem_size) + elif type(self) is type(other): + return (self._address - other._address) // self._bitem_size + else: + return NotImplemented + + def __getitem__(self, index): + if getattr(self, '_own', False) and index != 0: + raise IndexError + return BItem._from_ctypes(self._as_ctype_ptr[index]) + + def __setitem__(self, index, value): + self._as_ctype_ptr[index] = BItem._to_ctypes(value) + + if kind == 'charp' or kind == 'voidp': + @classmethod + def _arg_to_ctypes(cls, *value): + if value and isinstance(value[0], bytes): + return ctypes.c_char_p(value[0]) + else: + return super(CTypesPtr, cls)._arg_to_ctypes(*value) + + if kind == 'charp' or kind == 'bytep': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = sys.maxsize + p = ctypes.cast(self._as_ctype_ptr, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % ( + ctypes.sizeof(self._as_ctype_ptr.contents),) + return super(CTypesPtr, self)._get_own_repr() + # + if (BItem is self.ffi._get_cached_btype(model.void_type) or + BItem is self.ffi._get_cached_btype(model.PrimitiveType('char'))): + CTypesPtr._automatic_casts = True + # + CTypesPtr._fix_class() + return CTypesPtr + + def new_array_type(self, CTypesPtr, length): + if length is None: + brackets = ' &[]' + else: + brackets = ' &[%d]' % length + BItem = CTypesPtr._BItem + getbtype = self.ffi._get_cached_btype + if 
BItem is getbtype(model.PrimitiveType('char')): + kind = 'char' + elif BItem in (getbtype(model.PrimitiveType('signed char')), + getbtype(model.PrimitiveType('unsigned char'))): + kind = 'byte' + else: + kind = 'generic' + # + class CTypesArray(CTypesGenericArray): + __slots__ = ['_blob', '_own'] + if length is not None: + _ctype = BItem._ctype * length + else: + __slots__.append('_ctype') + _reftypename = BItem._get_c_name(brackets) + _declared_length = length + _CTPtr = CTypesPtr + + def __init__(self, init): + if length is None: + if isinstance(init, (int, long)): + len1 = init + init = None + elif kind == 'char' and isinstance(init, bytes): + len1 = len(init) + 1 # extra null + else: + init = tuple(init) + len1 = len(init) + self._ctype = BItem._ctype * len1 + self._blob = self._ctype() + self._own = True + if init is not None: + self._initialize(self._blob, init) + + @staticmethod + def _initialize(blob, init): + if isinstance(init, bytes): + init = [init[i:i+1] for i in range(len(init))] + else: + if isinstance(init, CTypesGenericArray): + if (len(init) != len(blob) or + not isinstance(init, CTypesArray)): + raise TypeError("length/type mismatch: %s" % (init,)) + init = tuple(init) + if len(init) > len(blob): + raise IndexError("too many initializers") + addr = ctypes.cast(blob, ctypes.c_void_p).value + PTR = ctypes.POINTER(BItem._ctype) + itemsize = ctypes.sizeof(BItem._ctype) + for i, value in enumerate(init): + p = ctypes.cast(addr + i * itemsize, PTR) + BItem._initialize(p.contents, value) + + def __len__(self): + return len(self._blob) + + def __getitem__(self, index): + if not (0 <= index < len(self._blob)): + raise IndexError + return BItem._from_ctypes(self._blob[index]) + + def __setitem__(self, index, value): + if not (0 <= index < len(self._blob)): + raise IndexError + self._blob[index] = BItem._to_ctypes(value) + + if kind == 'char' or kind == 'byte': + def _to_string(self, maxlen): + if maxlen < 0: + maxlen = len(self._blob) + p = ctypes.cast(self._blob, + ctypes.POINTER(ctypes.c_char)) + n = 0 + while n < maxlen and p[n] != b'\x00': + n += 1 + return b''.join([p[i] for i in range(n)]) + + def _get_own_repr(self): + if getattr(self, '_own', False): + return 'owning %d bytes' % (ctypes.sizeof(self._blob),) + return super(CTypesArray, self)._get_own_repr() + + def _convert_to_address(self, BClass): + if BClass in (CTypesPtr, None) or BClass._automatic_casts: + return ctypes.addressof(self._blob) + else: + return CTypesData._convert_to_address(self, BClass) + + @staticmethod + def _from_ctypes(ctypes_array): + self = CTypesArray.__new__(CTypesArray) + self._blob = ctypes_array + return self + + @staticmethod + def _arg_to_ctypes(value): + return CTypesPtr._arg_to_ctypes(value) + + def __add__(self, other): + if isinstance(other, (int, long)): + return CTypesPtr._new_pointer_at( + ctypes.addressof(self._blob) + + other * ctypes.sizeof(BItem._ctype)) + else: + return NotImplemented + + @classmethod + def _cast_from(cls, source): + raise NotImplementedError("casting to %r" % ( + cls._get_c_name(),)) + # + CTypesArray._fix_class() + return CTypesArray + + def _new_struct_or_union(self, kind, name, base_ctypes_class): + # + class struct_or_union(base_ctypes_class): + pass + struct_or_union.__name__ = '%s_%s' % (kind, name) + kind1 = kind + # + class CTypesStructOrUnion(CTypesBaseStructOrUnion): + __slots__ = ['_blob'] + _ctype = struct_or_union + _reftypename = '%s &' % (name,) + _kind = kind = kind1 + # + CTypesStructOrUnion._fix_class() + return CTypesStructOrUnion + + def 
new_struct_type(self, name): + return self._new_struct_or_union('struct', name, ctypes.Structure) + + def new_union_type(self, name): + return self._new_struct_or_union('union', name, ctypes.Union) + + def complete_struct_or_union(self, CTypesStructOrUnion, fields, tp, + totalsize=-1, totalalignment=-1, sflags=0, + pack=0): + if totalsize >= 0 or totalalignment >= 0: + raise NotImplementedError("the ctypes backend of CFFI does not support " + "structures completed by verify(); please " + "compile and install the _cffi_backend module.") + struct_or_union = CTypesStructOrUnion._ctype + fnames = [fname for (fname, BField, bitsize) in fields] + btypes = [BField for (fname, BField, bitsize) in fields] + bitfields = [bitsize for (fname, BField, bitsize) in fields] + # + bfield_types = {} + cfields = [] + for (fname, BField, bitsize) in fields: + if bitsize < 0: + cfields.append((fname, BField._ctype)) + bfield_types[fname] = BField + else: + cfields.append((fname, BField._ctype, bitsize)) + bfield_types[fname] = Ellipsis + if sflags & 8: + struct_or_union._pack_ = 1 + elif pack: + struct_or_union._pack_ = pack + struct_or_union._fields_ = cfields + CTypesStructOrUnion._bfield_types = bfield_types + # + @staticmethod + def _create_ctype_obj(init): + result = struct_or_union() + if init is not None: + initialize(result, init) + return result + CTypesStructOrUnion._create_ctype_obj = _create_ctype_obj + # + def initialize(blob, init): + if is_union: + if len(init) > 1: + raise ValueError("union initializer: %d items given, but " + "only one supported (use a dict if needed)" + % (len(init),)) + if not isinstance(init, dict): + if isinstance(init, (bytes, unicode)): + raise TypeError("union initializer: got a str") + init = tuple(init) + if len(init) > len(fnames): + raise ValueError("too many values for %s initializer" % + CTypesStructOrUnion._get_c_name()) + init = dict(zip(fnames, init)) + addr = ctypes.addressof(blob) + for fname, value in init.items(): + BField, bitsize = name2fieldtype[fname] + assert bitsize < 0, \ + "not implemented: initializer with bit fields" + offset = CTypesStructOrUnion._offsetof(fname) + PTR = ctypes.POINTER(BField._ctype) + p = ctypes.cast(addr + offset, PTR) + BField._initialize(p.contents, value) + is_union = CTypesStructOrUnion._kind == 'union' + name2fieldtype = dict(zip(fnames, zip(btypes, bitfields))) + # + for fname, BField, bitsize in fields: + if fname == '': + raise NotImplementedError("nested anonymous structs/unions") + if hasattr(CTypesStructOrUnion, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + if bitsize < 0: + def getter(self, fname=fname, BField=BField, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BField._from_ctypes(p.contents) + def setter(self, value, fname=fname, BField=BField): + setattr(self._blob, fname, BField._to_ctypes(value)) + # + if issubclass(BField, CTypesGenericArray): + setter = None + if BField._declared_length == 0: + def getter(self, fname=fname, BFieldPtr=BField._CTPtr, + offset=CTypesStructOrUnion._offsetof(fname), + PTR=ctypes.POINTER(BField._ctype)): + addr = ctypes.addressof(self._blob) + p = ctypes.cast(addr + offset, PTR) + return BFieldPtr._from_ctypes(p) + # + else: + def getter(self, fname=fname, BField=BField): + return BField._from_ctypes(getattr(self._blob, fname)) + def setter(self, value, fname=fname, BField=BField): + # xxx obscure workaround + 
value = BField._to_ctypes(value) + oldvalue = getattr(self._blob, fname) + setattr(self._blob, fname, value) + if value != getattr(self._blob, fname): + setattr(self._blob, fname, oldvalue) + raise OverflowError("value too large for bitfield") + setattr(CTypesStructOrUnion, fname, property(getter, setter)) + # + CTypesPtr = self.ffi._get_cached_btype(model.PointerType(tp)) + for fname in fnames: + if hasattr(CTypesPtr, fname): + raise ValueError("the field name %r conflicts in " + "the ctypes backend" % fname) + def getter(self, fname=fname): + return getattr(self[0], fname) + def setter(self, value, fname=fname): + setattr(self[0], fname, value) + setattr(CTypesPtr, fname, property(getter, setter)) + + def new_function_type(self, BArgs, BResult, has_varargs): + nameargs = [BArg._get_c_name() for BArg in BArgs] + if has_varargs: + nameargs.append('...') + nameargs = ', '.join(nameargs) + # + class CTypesFunctionPtr(CTypesGenericPtr): + __slots__ = ['_own_callback', '_name'] + _ctype = ctypes.CFUNCTYPE(getattr(BResult, '_ctype', None), + *[BArg._ctype for BArg in BArgs], + use_errno=True) + _reftypename = BResult._get_c_name('(* &)(%s)' % (nameargs,)) + + def __init__(self, init, error=None): + # create a callback to the Python callable init() + import traceback + assert not has_varargs, "varargs not supported for callbacks" + if getattr(BResult, '_ctype', None) is not None: + error = BResult._from_ctypes( + BResult._create_ctype_obj(error)) + else: + error = None + def callback(*args): + args2 = [] + for arg, BArg in zip(args, BArgs): + args2.append(BArg._from_ctypes(arg)) + try: + res2 = init(*args2) + res2 = BResult._to_ctypes(res2) + except: + traceback.print_exc() + res2 = error + if issubclass(BResult, CTypesGenericPtr): + if res2: + res2 = ctypes.cast(res2, ctypes.c_void_p).value + # .value: http://bugs.python.org/issue1574593 + else: + res2 = None + #print repr(res2) + return res2 + if issubclass(BResult, CTypesGenericPtr): + # The only pointers callbacks can return are void*s: + # http://bugs.python.org/issue5710 + callback_ctype = ctypes.CFUNCTYPE( + ctypes.c_void_p, + *[BArg._ctype for BArg in BArgs], + use_errno=True) + else: + callback_ctype = CTypesFunctionPtr._ctype + self._as_ctype_ptr = callback_ctype(callback) + self._address = ctypes.cast(self._as_ctype_ptr, + ctypes.c_void_p).value + self._own_callback = init + + @staticmethod + def _initialize(ctypes_ptr, value): + if value: + raise NotImplementedError("ctypes backend: not supported: " + "initializers for function pointers") + + def __repr__(self): + c_name = getattr(self, '_name', None) + if c_name: + i = self._reftypename.index('(* &)') + if self._reftypename[i-1] not in ' )*': + c_name = ' ' + c_name + c_name = self._reftypename.replace('(* &)', c_name) + return CTypesData.__repr__(self, c_name) + + def _get_own_repr(self): + if getattr(self, '_own_callback', None) is not None: + return 'calling %r' % (self._own_callback,) + return super(CTypesFunctionPtr, self)._get_own_repr() + + def __call__(self, *args): + if has_varargs: + assert len(args) >= len(BArgs) + extraargs = args[len(BArgs):] + args = args[:len(BArgs)] + else: + assert len(args) == len(BArgs) + ctypes_args = [] + for arg, BArg in zip(args, BArgs): + ctypes_args.append(BArg._arg_to_ctypes(arg)) + if has_varargs: + for i, arg in enumerate(extraargs): + if arg is None: + ctypes_args.append(ctypes.c_void_p(0)) # NULL + continue + if not isinstance(arg, CTypesData): + raise TypeError( + "argument %d passed in the variadic part " + "needs to be a cdata object 
(got %s)" % + (1 + len(BArgs) + i, type(arg).__name__)) + ctypes_args.append(arg._arg_to_ctypes(arg)) + result = self._as_ctype_ptr(*ctypes_args) + return BResult._from_ctypes(result) + # + CTypesFunctionPtr._fix_class() + return CTypesFunctionPtr + + def new_enum_type(self, name, enumerators, enumvalues, CTypesInt): + assert isinstance(name, str) + reverse_mapping = dict(zip(reversed(enumvalues), + reversed(enumerators))) + # + class CTypesEnum(CTypesInt): + __slots__ = [] + _reftypename = '%s &' % name + + def _get_own_repr(self): + value = self._value + try: + return '%d: %s' % (value, reverse_mapping[value]) + except KeyError: + return str(value) + + def _to_string(self, maxlen): + value = self._value + try: + return reverse_mapping[value] + except KeyError: + return str(value) + # + CTypesEnum._fix_class() + return CTypesEnum + + def get_errno(self): + return ctypes.get_errno() + + def set_errno(self, value): + ctypes.set_errno(value) + + def string(self, b, maxlen=-1): + return b._to_string(maxlen) + + def buffer(self, bptr, size=-1): + raise NotImplementedError("buffer() with ctypes backend") + + def sizeof(self, cdata_or_BType): + if isinstance(cdata_or_BType, CTypesData): + return cdata_or_BType._get_size_of_instance() + else: + assert issubclass(cdata_or_BType, CTypesData) + return cdata_or_BType._get_size() + + def alignof(self, BType): + assert issubclass(BType, CTypesData) + return BType._alignment() + + def newp(self, BType, source): + if not issubclass(BType, CTypesData): + raise TypeError + return BType._newp(source) + + def cast(self, BType, source): + return BType._cast_from(source) + + def callback(self, BType, source, error, onerror): + assert onerror is None # XXX not implemented + return BType(source, error) + + _weakref_cache_ref = None + + def gcp(self, cdata, destructor, size=0): + if self._weakref_cache_ref is None: + import weakref + class MyRef(weakref.ref): + def __eq__(self, other): + myref = self() + return self is other or ( + myref is not None and myref is other()) + def __ne__(self, other): + return not (self == other) + def __hash__(self): + try: + return self._hash + except AttributeError: + self._hash = hash(self()) + return self._hash + self._weakref_cache_ref = {}, MyRef + weak_cache, MyRef = self._weakref_cache_ref + + if destructor is None: + try: + del weak_cache[MyRef(cdata)] + except KeyError: + raise TypeError("Can remove destructor only on a object " + "previously returned by ffi.gc()") + return None + + def remove(k): + cdata, destructor = weak_cache.pop(k, (None, None)) + if destructor is not None: + destructor(cdata) + + new_cdata = self.cast(self.typeof(cdata), cdata) + assert new_cdata is not cdata + weak_cache[MyRef(new_cdata, remove)] = (cdata, destructor) + return new_cdata + + typeof = type + + def getcname(self, BType, replace_with): + return BType._get_c_name(replace_with) + + def typeoffsetof(self, BType, fieldname, num=0): + if isinstance(fieldname, str): + if num == 0 and issubclass(BType, CTypesGenericPtr): + BType = BType._BItem + if not issubclass(BType, CTypesBaseStructOrUnion): + raise TypeError("expected a struct or union ctype") + BField = BType._bfield_types[fieldname] + if BField is Ellipsis: + raise TypeError("not supported for bitfields") + return (BField, BType._offsetof(fieldname)) + elif isinstance(fieldname, (int, long)): + if issubclass(BType, CTypesGenericArray): + BType = BType._CTPtr + if not issubclass(BType, CTypesGenericPtr): + raise TypeError("expected an array or ptr ctype") + BItem = BType._BItem + offset 
= BItem._get_size() * fieldname
+            if offset > sys.maxsize:
+                raise OverflowError
+            return (BItem, offset)
+        else:
+            raise TypeError(type(fieldname))
+
+    def rawaddressof(self, BTypePtr, cdata, offset=None):
+        if isinstance(cdata, CTypesBaseStructOrUnion):
+            ptr = ctypes.pointer(type(cdata)._to_ctypes(cdata))
+        elif isinstance(cdata, CTypesGenericPtr):
+            if offset is None or not issubclass(type(cdata)._BItem,
+                                                CTypesBaseStructOrUnion):
+                raise TypeError("unexpected cdata type")
+            ptr = type(cdata)._to_ctypes(cdata)
+        elif isinstance(cdata, CTypesGenericArray):
+            ptr = type(cdata)._to_ctypes(cdata)
+        else:
+            raise TypeError("expected a <cdata 'struct-or-union'>")
+        if offset:
+            ptr = ctypes.cast(
+                ctypes.c_void_p(
+                    ctypes.cast(ptr, ctypes.c_void_p).value + offset),
+                type(ptr))
+        return BTypePtr._from_ctypes(ptr)
+
+
+class CTypesLibrary(object):
+
+    def __init__(self, backend, cdll):
+        self.backend = backend
+        self.cdll = cdll
+
+    def load_function(self, BType, name):
+        c_func = getattr(self.cdll, name)
+        funcobj = BType._from_ctypes(c_func)
+        funcobj._name = name
+        return funcobj
+
+    def read_variable(self, BType, name):
+        try:
+            ctypes_obj = BType._ctype.in_dll(self.cdll, name)
+        except AttributeError as e:
+            raise NotImplementedError(e)
+        return BType._from_ctypes(ctypes_obj)
+
+    def write_variable(self, BType, name, value):
+        new_ctypes_obj = BType._to_ctypes(value)
+        ctypes_obj = BType._ctype.in_dll(self.cdll, name)
+        ctypes.memmove(ctypes.addressof(ctypes_obj),
+                       ctypes.addressof(new_ctypes_obj),
+                       ctypes.sizeof(BType._ctype))
diff --git a/templates/skills/file_manager/dependencies/cffi/cffi_opcode.py b/templates/skills/file_manager/dependencies/cffi/cffi_opcode.py
new file mode 100644
index 00000000..6421df62
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/cffi_opcode.py
@@ -0,0 +1,187 @@
+from .error import VerificationError
+
+class CffiOp(object):
+    def __init__(self, op, arg):
+        self.op = op
+        self.arg = arg
+
+    def as_c_expr(self):
+        if self.op is None:
+            assert isinstance(self.arg, str)
+            return '(_cffi_opcode_t)(%s)' % (self.arg,)
+        classname = CLASS_NAME[self.op]
+        return '_CFFI_OP(_CFFI_OP_%s, %s)' % (classname, self.arg)
+
+    def as_python_bytes(self):
+        if self.op is None and self.arg.isdigit():
+            value = int(self.arg)     # non-negative: '-' not in self.arg
+            if value >= 2**31:
+                raise OverflowError("cannot emit %r: limited to 2**31-1"
+                                    % (self.arg,))
+            return format_four_bytes(value)
+        if isinstance(self.arg, str):
+            raise VerificationError("cannot emit to Python: %r" % (self.arg,))
+        return format_four_bytes((self.arg << 8) | self.op)
+
+    def __str__(self):
+        classname = CLASS_NAME.get(self.op, self.op)
+        return '(%s %s)' % (classname, self.arg)
+
+def format_four_bytes(num):
+    return '\\x%02X\\x%02X\\x%02X\\x%02X' % (
+        (num >> 24) & 0xFF,
+        (num >> 16) & 0xFF,
+        (num >> 8) & 0xFF,
+        (num) & 0xFF)
+
+OP_PRIMITIVE = 1
+OP_POINTER = 3
+OP_ARRAY = 5
+OP_OPEN_ARRAY = 7
+OP_STRUCT_UNION = 9
+OP_ENUM = 11
+OP_FUNCTION = 13
+OP_FUNCTION_END = 15
+OP_NOOP = 17
+OP_BITFIELD = 19
+OP_TYPENAME = 21
+OP_CPYTHON_BLTN_V = 23   # varargs
+OP_CPYTHON_BLTN_N = 25   # noargs
+OP_CPYTHON_BLTN_O = 27   # O  (i.e.
a single arg) +OP_CONSTANT = 29 +OP_CONSTANT_INT = 31 +OP_GLOBAL_VAR = 33 +OP_DLOPEN_FUNC = 35 +OP_DLOPEN_CONST = 37 +OP_GLOBAL_VAR_F = 39 +OP_EXTERN_PYTHON = 41 + +PRIM_VOID = 0 +PRIM_BOOL = 1 +PRIM_CHAR = 2 +PRIM_SCHAR = 3 +PRIM_UCHAR = 4 +PRIM_SHORT = 5 +PRIM_USHORT = 6 +PRIM_INT = 7 +PRIM_UINT = 8 +PRIM_LONG = 9 +PRIM_ULONG = 10 +PRIM_LONGLONG = 11 +PRIM_ULONGLONG = 12 +PRIM_FLOAT = 13 +PRIM_DOUBLE = 14 +PRIM_LONGDOUBLE = 15 + +PRIM_WCHAR = 16 +PRIM_INT8 = 17 +PRIM_UINT8 = 18 +PRIM_INT16 = 19 +PRIM_UINT16 = 20 +PRIM_INT32 = 21 +PRIM_UINT32 = 22 +PRIM_INT64 = 23 +PRIM_UINT64 = 24 +PRIM_INTPTR = 25 +PRIM_UINTPTR = 26 +PRIM_PTRDIFF = 27 +PRIM_SIZE = 28 +PRIM_SSIZE = 29 +PRIM_INT_LEAST8 = 30 +PRIM_UINT_LEAST8 = 31 +PRIM_INT_LEAST16 = 32 +PRIM_UINT_LEAST16 = 33 +PRIM_INT_LEAST32 = 34 +PRIM_UINT_LEAST32 = 35 +PRIM_INT_LEAST64 = 36 +PRIM_UINT_LEAST64 = 37 +PRIM_INT_FAST8 = 38 +PRIM_UINT_FAST8 = 39 +PRIM_INT_FAST16 = 40 +PRIM_UINT_FAST16 = 41 +PRIM_INT_FAST32 = 42 +PRIM_UINT_FAST32 = 43 +PRIM_INT_FAST64 = 44 +PRIM_UINT_FAST64 = 45 +PRIM_INTMAX = 46 +PRIM_UINTMAX = 47 +PRIM_FLOATCOMPLEX = 48 +PRIM_DOUBLECOMPLEX = 49 +PRIM_CHAR16 = 50 +PRIM_CHAR32 = 51 + +_NUM_PRIM = 52 +_UNKNOWN_PRIM = -1 +_UNKNOWN_FLOAT_PRIM = -2 +_UNKNOWN_LONG_DOUBLE = -3 + +_IO_FILE_STRUCT = -1 + +PRIMITIVE_TO_INDEX = { + 'char': PRIM_CHAR, + 'short': PRIM_SHORT, + 'int': PRIM_INT, + 'long': PRIM_LONG, + 'long long': PRIM_LONGLONG, + 'signed char': PRIM_SCHAR, + 'unsigned char': PRIM_UCHAR, + 'unsigned short': PRIM_USHORT, + 'unsigned int': PRIM_UINT, + 'unsigned long': PRIM_ULONG, + 'unsigned long long': PRIM_ULONGLONG, + 'float': PRIM_FLOAT, + 'double': PRIM_DOUBLE, + 'long double': PRIM_LONGDOUBLE, + '_cffi_float_complex_t': PRIM_FLOATCOMPLEX, + '_cffi_double_complex_t': PRIM_DOUBLECOMPLEX, + '_Bool': PRIM_BOOL, + 'wchar_t': PRIM_WCHAR, + 'char16_t': PRIM_CHAR16, + 'char32_t': PRIM_CHAR32, + 'int8_t': PRIM_INT8, + 'uint8_t': PRIM_UINT8, + 'int16_t': PRIM_INT16, + 'uint16_t': PRIM_UINT16, + 'int32_t': PRIM_INT32, + 'uint32_t': PRIM_UINT32, + 'int64_t': PRIM_INT64, + 'uint64_t': PRIM_UINT64, + 'intptr_t': PRIM_INTPTR, + 'uintptr_t': PRIM_UINTPTR, + 'ptrdiff_t': PRIM_PTRDIFF, + 'size_t': PRIM_SIZE, + 'ssize_t': PRIM_SSIZE, + 'int_least8_t': PRIM_INT_LEAST8, + 'uint_least8_t': PRIM_UINT_LEAST8, + 'int_least16_t': PRIM_INT_LEAST16, + 'uint_least16_t': PRIM_UINT_LEAST16, + 'int_least32_t': PRIM_INT_LEAST32, + 'uint_least32_t': PRIM_UINT_LEAST32, + 'int_least64_t': PRIM_INT_LEAST64, + 'uint_least64_t': PRIM_UINT_LEAST64, + 'int_fast8_t': PRIM_INT_FAST8, + 'uint_fast8_t': PRIM_UINT_FAST8, + 'int_fast16_t': PRIM_INT_FAST16, + 'uint_fast16_t': PRIM_UINT_FAST16, + 'int_fast32_t': PRIM_INT_FAST32, + 'uint_fast32_t': PRIM_UINT_FAST32, + 'int_fast64_t': PRIM_INT_FAST64, + 'uint_fast64_t': PRIM_UINT_FAST64, + 'intmax_t': PRIM_INTMAX, + 'uintmax_t': PRIM_UINTMAX, + } + +F_UNION = 0x01 +F_CHECK_FIELDS = 0x02 +F_PACKED = 0x04 +F_EXTERNAL = 0x08 +F_OPAQUE = 0x10 + +G_FLAGS = dict([('_CFFI_' + _key, globals()[_key]) + for _key in ['F_UNION', 'F_CHECK_FIELDS', 'F_PACKED', + 'F_EXTERNAL', 'F_OPAQUE']]) + +CLASS_NAME = {} +for _name, _value in list(globals().items()): + if _name.startswith('OP_') and isinstance(_value, int): + CLASS_NAME[_value] = _name[3:] diff --git a/templates/skills/file_manager/dependencies/cffi/commontypes.py b/templates/skills/file_manager/dependencies/cffi/commontypes.py new file mode 100644 index 00000000..d4dae351 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/commontypes.py @@ -0,0 +1,82 @@ 
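+# Note (added commentary, not part of upstream cffi): this module maps
+# "common" C type names to primitive or struct types known to the parser;
+# e.g. resolve_common_type(parser, "FILE") below yields the opaque
+# '_IO_FILE' struct type registered here, and results are memoized in
+# _CACHE.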
+import sys +from . import model +from .error import FFIError + + +COMMON_TYPES = {} + +try: + # fetch "bool" and all simple Windows types + from _cffi_backend import _get_common_types + _get_common_types(COMMON_TYPES) +except ImportError: + pass + +COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE') +COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above +COMMON_TYPES['float _Complex'] = '_cffi_float_complex_t' +COMMON_TYPES['double _Complex'] = '_cffi_double_complex_t' + +for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + if _type.endswith('_t'): + COMMON_TYPES[_type] = _type +del _type + +_CACHE = {} + +def resolve_common_type(parser, commontype): + try: + return _CACHE[commontype] + except KeyError: + cdecl = COMMON_TYPES.get(commontype, commontype) + if not isinstance(cdecl, str): + result, quals = cdecl, 0 # cdecl is already a BaseType + elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES: + result, quals = model.PrimitiveType(cdecl), 0 + elif cdecl == 'set-unicode-needed': + raise FFIError("The Windows type %r is only available after " + "you call ffi.set_unicode()" % (commontype,)) + else: + if commontype == cdecl: + raise FFIError( + "Unsupported type: %r. Please look at " + "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations " + "and file an issue if you think this type should really " + "be supported." % (commontype,)) + result, quals = parser.parse_type_and_quals(cdecl) # recursive + + assert isinstance(result, model.BaseTypeByIdentity) + _CACHE[commontype] = result, quals + return result, quals + + +# ____________________________________________________________ +# extra types for Windows (most of them are in commontypes.c) + + +def win_common_types(): + return { + "UNICODE_STRING": model.StructType( + "_UNICODE_STRING", + ["Length", + "MaximumLength", + "Buffer"], + [model.PrimitiveType("unsigned short"), + model.PrimitiveType("unsigned short"), + model.PointerType(model.PrimitiveType("wchar_t"))], + [-1, -1, -1]), + "PUNICODE_STRING": "UNICODE_STRING *", + "PCUNICODE_STRING": "const UNICODE_STRING *", + + "TBYTE": "set-unicode-needed", + "TCHAR": "set-unicode-needed", + "LPCTSTR": "set-unicode-needed", + "PCTSTR": "set-unicode-needed", + "LPTSTR": "set-unicode-needed", + "PTSTR": "set-unicode-needed", + "PTBYTE": "set-unicode-needed", + "PTCHAR": "set-unicode-needed", + } + +if sys.platform == 'win32': + COMMON_TYPES.update(win_common_types()) diff --git a/templates/skills/file_manager/dependencies/cffi/cparser.py b/templates/skills/file_manager/dependencies/cffi/cparser.py new file mode 100644 index 00000000..eee83caf --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/cparser.py @@ -0,0 +1,1015 @@ +from . import model +from .commontypes import COMMON_TYPES, resolve_common_type +from .error import FFIError, CDefError +try: + from . import _pycparser as pycparser +except ImportError: + import pycparser +import weakref, re, sys + +try: + if sys.version_info < (3,): + import thread as _thread + else: + import _thread + lock = _thread.allocate_lock() +except ImportError: + lock = None + +def _workaround_for_static_import_finders(): + # Issue #392: packaging tools like cx_Freeze can not find these + # because pycparser uses exec dynamic import. This is an obscure + # workaround. This function is never called. 
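+    # (added note) Freezers statically scan source files for 'import'
+    # statements, so merely naming these modules here makes them get
+    # bundled, even though this function body never runs.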
+    import pycparser.yacctab
+    import pycparser.lextab
+
+CDEF_SOURCE_STRING = "<cdef source string>"
+_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$",
+                        re.DOTALL | re.MULTILINE)
+_r_define = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)"
+                       r"\b((?:[^\n\\]|\\.)*?)$",
+                       re.DOTALL | re.MULTILINE)
+_r_line_directive = re.compile(r"^[ \t]*#[ \t]*(?:line|\d+)\b.*$", re.MULTILINE)
+_r_partial_enum = re.compile(r"=\s*\.\.\.\s*[,}]|\.\.\.\s*\}")
+_r_enum_dotdotdot = re.compile(r"__dotdotdot\d+__$")
+_r_partial_array = re.compile(r"\[\s*\.\.\.\s*\]")
+_r_words = re.compile(r"\w+|\S")
+_parser_cache = None
+_r_int_literal = re.compile(r"-?0?x?[0-9a-f]+[lu]*$", re.IGNORECASE)
+_r_stdcall1 = re.compile(r"\b(__stdcall|WINAPI)\b")
+_r_stdcall2 = re.compile(r"[(]\s*(__stdcall|WINAPI)\b")
+_r_cdecl = re.compile(r"\b__cdecl\b")
+_r_extern_python = re.compile(r'\bextern\s*"'
+                              r'(Python|Python\s*\+\s*C|C\s*\+\s*Python)"\s*.')
+_r_star_const_space = re.compile(       # matches "* const "
+    r"[*]\s*((const|volatile|restrict)\b\s*)+")
+_r_int_dotdotdot = re.compile(r"(\b(int|long|short|signed|unsigned|char)\s*)+"
+                              r"\.\.\.")
+_r_float_dotdotdot = re.compile(r"\b(double|float)\s*\.\.\.")
+
+def _get_parser():
+    global _parser_cache
+    if _parser_cache is None:
+        _parser_cache = pycparser.CParser()
+    return _parser_cache
+
+def _workaround_for_old_pycparser(csource):
+    # Workaround for a pycparser issue (fixed between pycparser 2.10 and
+    # 2.14): "char*const***" gives us a wrong syntax tree, the same as
+    # for "char***(*const)".  This means we can't tell the difference
+    # afterwards.  But "char(*const(***))" gives us the right syntax
+    # tree.  The issue only occurs if there are several stars in
+    # sequence with no parenthesis inbetween, just possibly qualifiers.
+    # Attempt to fix it by adding some parentheses in the source: each
+    # time we see "* const" or "* const *", we add an opening
+    # parenthesis before each star---the hard part is figuring out where
+    # to close them.
+    parts = []
+    while True:
+        match = _r_star_const_space.search(csource)
+        if not match:
+            break
+        #print repr(''.join(parts)+csource), '=>',
+        parts.append(csource[:match.start()])
+        parts.append('('); closing = ')'
+        parts.append(match.group())   # e.g.
"* const " + endpos = match.end() + if csource.startswith('*', endpos): + parts.append('('); closing += ')' + level = 0 + i = endpos + while i < len(csource): + c = csource[i] + if c == '(': + level += 1 + elif c == ')': + if level == 0: + break + level -= 1 + elif c in ',;=': + if level == 0: + break + i += 1 + csource = csource[endpos:i] + closing + csource[i:] + #print repr(''.join(parts)+csource) + parts.append(csource) + return ''.join(parts) + +def _preprocess_extern_python(csource): + # input: `extern "Python" int foo(int);` or + # `extern "Python" { int foo(int); }` + # output: + # void __cffi_extern_python_start; + # int foo(int); + # void __cffi_extern_python_stop; + # + # input: `extern "Python+C" int foo(int);` + # output: + # void __cffi_extern_python_plus_c_start; + # int foo(int); + # void __cffi_extern_python_stop; + parts = [] + while True: + match = _r_extern_python.search(csource) + if not match: + break + endpos = match.end() - 1 + #print + #print ''.join(parts)+csource + #print '=>' + parts.append(csource[:match.start()]) + if 'C' in match.group(1): + parts.append('void __cffi_extern_python_plus_c_start; ') + else: + parts.append('void __cffi_extern_python_start; ') + if csource[endpos] == '{': + # grouping variant + closing = csource.find('}', endpos) + if closing < 0: + raise CDefError("'extern \"Python\" {': no '}' found") + if csource.find('{', endpos + 1, closing) >= 0: + raise NotImplementedError("cannot use { } inside a block " + "'extern \"Python\" { ... }'") + parts.append(csource[endpos+1:closing]) + csource = csource[closing+1:] + else: + # non-grouping variant + semicolon = csource.find(';', endpos) + if semicolon < 0: + raise CDefError("'extern \"Python\": no ';' found") + parts.append(csource[endpos:semicolon+1]) + csource = csource[semicolon+1:] + parts.append(' void __cffi_extern_python_stop;') + #print ''.join(parts)+csource + #print + parts.append(csource) + return ''.join(parts) + +def _warn_for_string_literal(csource): + if '"' not in csource: + return + for line in csource.splitlines(): + if '"' in line and not line.lstrip().startswith('#'): + import warnings + warnings.warn("String literal found in cdef() or type source. " + "String literals are ignored here, but you should " + "remove them anyway because some character sequences " + "confuse pre-parsing.") + break + +def _warn_for_non_extern_non_static_global_variable(decl): + if not decl.storage: + import warnings + warnings.warn("Global variable '%s' in cdef(): for consistency " + "with C it should have a storage class specifier " + "(usually 'extern')" % (decl.name,)) + +def _remove_line_directives(csource): + # _r_line_directive matches whole lines, without the final \n, if they + # start with '#line' with some spacing allowed, or '#NUMBER'. This + # function stores them away and replaces them with exactly the string + # '#line@N', where N is the index in the list 'line_directives'. 
+ line_directives = [] + def replace(m): + i = len(line_directives) + line_directives.append(m.group()) + return '#line@%d' % i + csource = _r_line_directive.sub(replace, csource) + return csource, line_directives + +def _put_back_line_directives(csource, line_directives): + def replace(m): + s = m.group() + if not s.startswith('#line@'): + raise AssertionError("unexpected #line directive " + "(should have been processed and removed") + return line_directives[int(s[6:])] + return _r_line_directive.sub(replace, csource) + +def _preprocess(csource): + # First, remove the lines of the form '#line N "filename"' because + # the "filename" part could confuse the rest + csource, line_directives = _remove_line_directives(csource) + # Remove comments. NOTE: this only work because the cdef() section + # should not contain any string literals (except in line directives)! + def replace_keeping_newlines(m): + return ' ' + m.group().count('\n') * '\n' + csource = _r_comment.sub(replace_keeping_newlines, csource) + # Remove the "#define FOO x" lines + macros = {} + for match in _r_define.finditer(csource): + macroname, macrovalue = match.groups() + macrovalue = macrovalue.replace('\\\n', '').strip() + macros[macroname] = macrovalue + csource = _r_define.sub('', csource) + # + if pycparser.__version__ < '2.14': + csource = _workaround_for_old_pycparser(csource) + # + # BIG HACK: replace WINAPI or __stdcall with "volatile const". + # It doesn't make sense for the return type of a function to be + # "volatile volatile const", so we abuse it to detect __stdcall... + # Hack number 2 is that "int(volatile *fptr)();" is not valid C + # syntax, so we place the "volatile" before the opening parenthesis. + csource = _r_stdcall2.sub(' volatile volatile const(', csource) + csource = _r_stdcall1.sub(' volatile volatile const ', csource) + csource = _r_cdecl.sub(' ', csource) + # + # Replace `extern "Python"` with start/end markers + csource = _preprocess_extern_python(csource) + # + # Now there should not be any string literal left; warn if we get one + _warn_for_string_literal(csource) + # + # Replace "[...]" with "[__dotdotdotarray__]" + csource = _r_partial_array.sub('[__dotdotdotarray__]', csource) + # + # Replace "...}" with "__dotdotdotNUM__}". This construction should + # occur only at the end of enums; at the end of structs we have "...;}" + # and at the end of vararg functions "...);". Also replace "=...[,}]" + # with ",__dotdotdotNUM__[,}]": this occurs in the enums too, when + # giving an unknown value. + matches = list(_r_partial_enum.finditer(csource)) + for number, match in enumerate(reversed(matches)): + p = match.start() + if csource[p] == '=': + p2 = csource.find('...', p, match.end()) + assert p2 > p + csource = '%s,__dotdotdot%d__ %s' % (csource[:p], number, + csource[p2+3:]) + else: + assert csource[p:p+3] == '...' + csource = '%s __dotdotdot%d__ %s' % (csource[:p], number, + csource[p+3:]) + # Replace "int ..." or "unsigned long int..." with "__dotdotdotint__" + csource = _r_int_dotdotdot.sub(' __dotdotdotint__ ', csource) + # Replace "float ..." or "double..." with "__dotdotdotfloat__" + csource = _r_float_dotdotdot.sub(' __dotdotdotfloat__ ', csource) + # Replace all remaining "..." with the same name, "__dotdotdot__", + # which is declared with a typedef for the purpose of C parsing. 
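+    # e.g. (added illustration) the '...' in 'int foo(int, ...);' becomes
+    # ' __dotdotdot__ ', a name for which a dummy typedef is emitted in
+    # Parser._parse() so that pycparser can parse the declaration.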
+ csource = csource.replace('...', ' __dotdotdot__ ') + # Finally, put back the line directives + csource = _put_back_line_directives(csource, line_directives) + return csource, macros + +def _common_type_names(csource): + # Look in the source for what looks like usages of types from the + # list of common types. A "usage" is approximated here as the + # appearance of the word, minus a "definition" of the type, which + # is the last word in a "typedef" statement. Approximative only + # but should be fine for all the common types. + look_for_words = set(COMMON_TYPES) + look_for_words.add(';') + look_for_words.add(',') + look_for_words.add('(') + look_for_words.add(')') + look_for_words.add('typedef') + words_used = set() + is_typedef = False + paren = 0 + previous_word = '' + for word in _r_words.findall(csource): + if word in look_for_words: + if word == ';': + if is_typedef: + words_used.discard(previous_word) + look_for_words.discard(previous_word) + is_typedef = False + elif word == 'typedef': + is_typedef = True + paren = 0 + elif word == '(': + paren += 1 + elif word == ')': + paren -= 1 + elif word == ',': + if is_typedef and paren == 0: + words_used.discard(previous_word) + look_for_words.discard(previous_word) + else: # word in COMMON_TYPES + words_used.add(word) + previous_word = word + return words_used + + +class Parser(object): + + def __init__(self): + self._declarations = {} + self._included_declarations = set() + self._anonymous_counter = 0 + self._structnode2type = weakref.WeakKeyDictionary() + self._options = {} + self._int_constants = {} + self._recomplete = [] + self._uses_new_feature = None + + def _parse(self, csource): + csource, macros = _preprocess(csource) + # XXX: for more efficiency we would need to poke into the + # internals of CParser... the following registers the + # typedefs, because their presence or absence influences the + # parsing itself (but what they are typedef'ed to plays no role) + ctn = _common_type_names(csource) + typenames = [] + for name in sorted(self._declarations): + if name.startswith('typedef '): + name = name[8:] + typenames.append(name) + ctn.discard(name) + typenames += sorted(ctn) + # + csourcelines = [] + csourcelines.append('# 1 ""') + for typename in typenames: + csourcelines.append('typedef int %s;' % typename) + csourcelines.append('typedef int __dotdotdotint__, __dotdotdotfloat__,' + ' __dotdotdot__;') + # this forces pycparser to consider the following in the file + # called from line 1 + csourcelines.append('# 1 "%s"' % (CDEF_SOURCE_STRING,)) + csourcelines.append(csource) + csourcelines.append('') # see test_missing_newline_bug + fullcsource = '\n'.join(csourcelines) + if lock is not None: + lock.acquire() # pycparser is not thread-safe... + try: + ast = _get_parser().parse(fullcsource) + except pycparser.c_parser.ParseError as e: + self.convert_pycparser_error(e, csource) + finally: + if lock is not None: + lock.release() + # csource will be used to find buggy source text + return ast, macros, csource + + def _convert_pycparser_error(self, e, csource): + # xxx look for ":NUM:" at the start of str(e) + # and interpret that as a line number. This will not work if + # the user gives explicit ``# NUM "FILE"`` directives. 
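+        # e.g. assuming CDEF_SOURCE_STRING == '<cdef source string>', a
+        # pycparser message '<cdef source string>:3: before: x' maps back
+        # to line 3 of the user's csource.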
+ line = None + msg = str(e) + match = re.match(r"%s:(\d+):" % (CDEF_SOURCE_STRING,), msg) + if match: + linenum = int(match.group(1), 10) + csourcelines = csource.splitlines() + if 1 <= linenum <= len(csourcelines): + line = csourcelines[linenum-1] + return line + + def convert_pycparser_error(self, e, csource): + line = self._convert_pycparser_error(e, csource) + + msg = str(e) + if line: + msg = 'cannot parse "%s"\n%s' % (line.strip(), msg) + else: + msg = 'parse error\n%s' % (msg,) + raise CDefError(msg) + + def parse(self, csource, override=False, packed=False, pack=None, + dllexport=False): + if packed: + if packed != True: + raise ValueError("'packed' should be False or True; use " + "'pack' to give another value") + if pack: + raise ValueError("cannot give both 'pack' and 'packed'") + pack = 1 + elif pack: + if pack & (pack - 1): + raise ValueError("'pack' must be a power of two, not %r" % + (pack,)) + else: + pack = 0 + prev_options = self._options + try: + self._options = {'override': override, + 'packed': pack, + 'dllexport': dllexport} + self._internal_parse(csource) + finally: + self._options = prev_options + + def _internal_parse(self, csource): + ast, macros, csource = self._parse(csource) + # add the macros + self._process_macros(macros) + # find the first "__dotdotdot__" and use that as a separator + # between the repeated typedefs and the real csource + iterator = iter(ast.ext) + for decl in iterator: + if decl.name == '__dotdotdot__': + break + else: + assert 0 + current_decl = None + # + try: + self._inside_extern_python = '__cffi_extern_python_stop' + for decl in iterator: + current_decl = decl + if isinstance(decl, pycparser.c_ast.Decl): + self._parse_decl(decl) + elif isinstance(decl, pycparser.c_ast.Typedef): + if not decl.name: + raise CDefError("typedef does not declare any name", + decl) + quals = 0 + if (isinstance(decl.type.type, pycparser.c_ast.IdentifierType) and + decl.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_type(decl) + elif (isinstance(decl.type, pycparser.c_ast.PtrDecl) and + isinstance(decl.type.type, pycparser.c_ast.TypeDecl) and + isinstance(decl.type.type.type, + pycparser.c_ast.IdentifierType) and + decl.type.type.type.names[-1].startswith('__dotdotdot')): + realtype = self._get_unknown_ptr_type(decl) + else: + realtype, quals = self._get_type_and_quals( + decl.type, name=decl.name, partial_length_ok=True, + typedef_example="*(%s *)0" % (decl.name,)) + self._declare('typedef ' + decl.name, realtype, quals=quals) + elif decl.__class__.__name__ == 'Pragma': + # skip pragma, only in pycparser 2.15 + import warnings + warnings.warn( + "#pragma in cdef() are entirely ignored. " + "They should be removed for now, otherwise your " + "code might behave differently in a future version " + "of CFFI if #pragma support gets added. 
Note that " + "'#pragma pack' needs to be replaced with the " + "'packed' keyword argument to cdef().") + else: + raise CDefError("unexpected <%s>: this construct is valid " + "C but not valid in cdef()" % + decl.__class__.__name__, decl) + except CDefError as e: + if len(e.args) == 1: + e.args = e.args + (current_decl,) + raise + except FFIError as e: + msg = self._convert_pycparser_error(e, csource) + if msg: + e.args = (e.args[0] + "\n *** Err: %s" % msg,) + raise + + def _add_constants(self, key, val): + if key in self._int_constants: + if self._int_constants[key] == val: + return # ignore identical double declarations + raise FFIError( + "multiple declarations of constant: %s" % (key,)) + self._int_constants[key] = val + + def _add_integer_constant(self, name, int_str): + int_str = int_str.lower().rstrip("ul") + neg = int_str.startswith('-') + if neg: + int_str = int_str[1:] + # "010" is not valid oct in py3 + if (int_str.startswith("0") and int_str != '0' + and not int_str.startswith("0x")): + int_str = "0o" + int_str[1:] + pyvalue = int(int_str, 0) + if neg: + pyvalue = -pyvalue + self._add_constants(name, pyvalue) + self._declare('macro ' + name, pyvalue) + + def _process_macros(self, macros): + for key, value in macros.items(): + value = value.strip() + if _r_int_literal.match(value): + self._add_integer_constant(key, value) + elif value == '...': + self._declare('macro ' + key, value) + else: + raise CDefError( + 'only supports one of the following syntax:\n' + ' #define %s ... (literally dot-dot-dot)\n' + ' #define %s NUMBER (with NUMBER an integer' + ' constant, decimal/hex/octal)\n' + 'got:\n' + ' #define %s %s' + % (key, key, key, value)) + + def _declare_function(self, tp, quals, decl): + tp = self._get_type_pointer(tp, quals) + if self._options.get('dllexport'): + tag = 'dllexport_python ' + elif self._inside_extern_python == '__cffi_extern_python_start': + tag = 'extern_python ' + elif self._inside_extern_python == '__cffi_extern_python_plus_c_start': + tag = 'extern_python_plus_c ' + else: + tag = 'function ' + self._declare(tag + decl.name, tp) + + def _parse_decl(self, decl): + node = decl.type + if isinstance(node, pycparser.c_ast.FuncDecl): + tp, quals = self._get_type_and_quals(node, name=decl.name) + assert isinstance(tp, model.RawFunctionType) + self._declare_function(tp, quals, decl) + else: + if isinstance(node, pycparser.c_ast.Struct): + self._get_struct_union_enum_type('struct', node) + elif isinstance(node, pycparser.c_ast.Union): + self._get_struct_union_enum_type('union', node) + elif isinstance(node, pycparser.c_ast.Enum): + self._get_struct_union_enum_type('enum', node) + elif not decl.name: + raise CDefError("construct does not declare any variable", + decl) + # + if decl.name: + tp, quals = self._get_type_and_quals(node, + partial_length_ok=True) + if tp.is_raw_function: + self._declare_function(tp, quals, decl) + elif (tp.is_integer_type() and + hasattr(decl, 'init') and + hasattr(decl.init, 'value') and + _r_int_literal.match(decl.init.value)): + self._add_integer_constant(decl.name, decl.init.value) + elif (tp.is_integer_type() and + isinstance(decl.init, pycparser.c_ast.UnaryOp) and + decl.init.op == '-' and + hasattr(decl.init.expr, 'value') and + _r_int_literal.match(decl.init.expr.value)): + self._add_integer_constant(decl.name, + '-' + decl.init.expr.value) + elif (tp is model.void_type and + decl.name.startswith('__cffi_extern_python_')): + # hack: `extern "Python"` in the C source is replaced + # with "void __cffi_extern_python_start;" and + # 
"void __cffi_extern_python_stop;" + self._inside_extern_python = decl.name + else: + if self._inside_extern_python !='__cffi_extern_python_stop': + raise CDefError( + "cannot declare constants or " + "variables with 'extern \"Python\"'") + if (quals & model.Q_CONST) and not tp.is_array_type: + self._declare('constant ' + decl.name, tp, quals=quals) + else: + _warn_for_non_extern_non_static_global_variable(decl) + self._declare('variable ' + decl.name, tp, quals=quals) + + def parse_type(self, cdecl): + return self.parse_type_and_quals(cdecl)[0] + + def parse_type_and_quals(self, cdecl): + ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2] + assert not macros + exprnode = ast.ext[-1].type.args.params[0] + if isinstance(exprnode, pycparser.c_ast.ID): + raise CDefError("unknown identifier '%s'" % (exprnode.name,)) + return self._get_type_and_quals(exprnode.type) + + def _declare(self, name, obj, included=False, quals=0): + if name in self._declarations: + prevobj, prevquals = self._declarations[name] + if prevobj is obj and prevquals == quals: + return + if not self._options.get('override'): + raise FFIError( + "multiple declarations of %s (for interactive usage, " + "try cdef(xx, override=True))" % (name,)) + assert '__dotdotdot__' not in name.split() + self._declarations[name] = (obj, quals) + if included: + self._included_declarations.add(obj) + + def _extract_quals(self, type): + quals = 0 + if isinstance(type, (pycparser.c_ast.TypeDecl, + pycparser.c_ast.PtrDecl)): + if 'const' in type.quals: + quals |= model.Q_CONST + if 'volatile' in type.quals: + quals |= model.Q_VOLATILE + if 'restrict' in type.quals: + quals |= model.Q_RESTRICT + return quals + + def _get_type_pointer(self, type, quals, declname=None): + if isinstance(type, model.RawFunctionType): + return type.as_function_pointer() + if (isinstance(type, model.StructOrUnionOrEnum) and + type.name.startswith('$') and type.name[1:].isdigit() and + type.forcename is None and declname is not None): + return model.NamedPointerType(type, declname, quals) + return model.PointerType(type, quals) + + def _get_type_and_quals(self, typenode, name=None, partial_length_ok=False, + typedef_example=None): + # first, dereference typedefs, if we have it already parsed, we're good + if (isinstance(typenode, pycparser.c_ast.TypeDecl) and + isinstance(typenode.type, pycparser.c_ast.IdentifierType) and + len(typenode.type.names) == 1 and + ('typedef ' + typenode.type.names[0]) in self._declarations): + tp, quals = self._declarations['typedef ' + typenode.type.names[0]] + quals |= self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.ArrayDecl): + # array type + if typenode.dim is None: + length = None + else: + length = self._parse_constant( + typenode.dim, partial_length_ok=partial_length_ok) + # a hack: in 'typedef int foo_t[...][...];', don't use '...' as + # the length but use directly the C expression that would be + # generated by recompiler.py. 
This lets the typedef be used in + # many more places within recompiler.py + if typedef_example is not None: + if length == '...': + length = '_cffi_array_len(%s)' % (typedef_example,) + typedef_example = "*" + typedef_example + # + tp, quals = self._get_type_and_quals(typenode.type, + partial_length_ok=partial_length_ok, + typedef_example=typedef_example) + return model.ArrayType(tp, length), quals + # + if isinstance(typenode, pycparser.c_ast.PtrDecl): + # pointer type + itemtype, itemquals = self._get_type_and_quals(typenode.type) + tp = self._get_type_pointer(itemtype, itemquals, declname=name) + quals = self._extract_quals(typenode) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.TypeDecl): + quals = self._extract_quals(typenode) + type = typenode.type + if isinstance(type, pycparser.c_ast.IdentifierType): + # assume a primitive type. get it from .names, but reduce + # synonyms to a single chosen combination + names = list(type.names) + if names != ['signed', 'char']: # keep this unmodified + prefixes = {} + while names: + name = names[0] + if name in ('short', 'long', 'signed', 'unsigned'): + prefixes[name] = prefixes.get(name, 0) + 1 + del names[0] + else: + break + # ignore the 'signed' prefix below, and reorder the others + newnames = [] + for prefix in ('unsigned', 'short', 'long'): + for i in range(prefixes.get(prefix, 0)): + newnames.append(prefix) + if not names: + names = ['int'] # implicitly + if names == ['int']: # but kill it if 'short' or 'long' + if 'short' in prefixes or 'long' in prefixes: + names = [] + names = newnames + names + ident = ' '.join(names) + if ident == 'void': + return model.void_type, quals + if ident == '__dotdotdot__': + raise FFIError(':%d: bad usage of "..."' % + typenode.coord.line) + tp0, quals0 = resolve_common_type(self, ident) + return tp0, (quals | quals0) + # + if isinstance(type, pycparser.c_ast.Struct): + # 'struct foobar' + tp = self._get_struct_union_enum_type('struct', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Union): + # 'union foobar' + tp = self._get_struct_union_enum_type('union', type, name) + return tp, quals + # + if isinstance(type, pycparser.c_ast.Enum): + # 'enum foobar' + tp = self._get_struct_union_enum_type('enum', type, name) + return tp, quals + # + if isinstance(typenode, pycparser.c_ast.FuncDecl): + # a function type + return self._parse_function_type(typenode, name), 0 + # + # nested anonymous structs or unions end up here + if isinstance(typenode, pycparser.c_ast.Struct): + return self._get_struct_union_enum_type('struct', typenode, name, + nested=True), 0 + if isinstance(typenode, pycparser.c_ast.Union): + return self._get_struct_union_enum_type('union', typenode, name, + nested=True), 0 + # + raise FFIError(":%d: bad or unsupported type declaration" % + typenode.coord.line) + + def _parse_function_type(self, typenode, funcname=None): + params = list(getattr(typenode.args, 'params', [])) + for i, arg in enumerate(params): + if not hasattr(arg, 'type'): + raise CDefError("%s arg %d: unknown type '%s'" + " (if you meant to use the old C syntax of giving" + " untyped arguments, it is not supported)" + % (funcname or 'in expression', i + 1, + getattr(arg, 'name', '?'))) + ellipsis = ( + len(params) > 0 and + isinstance(params[-1].type, pycparser.c_ast.TypeDecl) and + isinstance(params[-1].type.type, + pycparser.c_ast.IdentifierType) and + params[-1].type.type.names == ['__dotdotdot__']) + if ellipsis: + params.pop() + if not params: + raise CDefError( + "%s: a function with 
only '(...)' as argument"
+                    " is not correct C" % (funcname or 'in expression'))
+        args = [self._as_func_arg(*self._get_type_and_quals(argdeclnode.type))
+                for argdeclnode in params]
+        if not ellipsis and args == [model.void_type]:
+            args = []
+        result, quals = self._get_type_and_quals(typenode.type)
+        # the 'quals' on the result type are ignored. HACK: we abuse them
+        # to detect __stdcall functions: we textually replace "__stdcall"
+        # with "volatile volatile const" above.
+        abi = None
+        if hasattr(typenode.type, 'quals'): # else, probable syntax error anyway
+            if typenode.type.quals[-3:] == ['volatile', 'volatile', 'const']:
+                abi = '__stdcall'
+        return model.RawFunctionType(tuple(args), result, ellipsis, abi)
+
+    def _as_func_arg(self, type, quals):
+        if isinstance(type, model.ArrayType):
+            return model.PointerType(type.item, quals)
+        elif isinstance(type, model.RawFunctionType):
+            return type.as_function_pointer()
+        else:
+            return type
+
+    def _get_struct_union_enum_type(self, kind, type, name=None, nested=False):
+        # First, a level of caching on the exact 'type' node of the AST.
+        # This is obscure, but needed because pycparser "unrolls" declarations
+        # such as "typedef struct { } foo_t, *foo_p" and we end up with
+        # an AST that is not a tree, but a DAG, with the "type" node of the
+        # two branches foo_t and foo_p of the trees being the same node.
+        # It's a bit silly but detecting "DAG-ness" in the AST tree seems
+        # to be the only way to distinguish this case from two independent
+        # structs. See test_struct_with_two_usages.
+        try:
+            return self._structnode2type[type]
+        except KeyError:
+            pass
+        #
+        # Note that this must handle parsing "struct foo" any number of
+        # times and always return the same StructType object. Additionally,
+        # one of these times (not necessarily the first), the fields of
+        # the struct can be specified with "struct foo { ...fields... }".
+        # If no name is given, then we have to create a new anonymous struct
+        # with no caching; in this case, the fields are either specified
+        # right now or never.
+        #
+        force_name = name
+        name = type.name
+        #
+        # get the type or create it if needed
+        if name is None:
+            # 'force_name' is used to guess a more readable name for
+            # anonymous structs, for the common case "typedef struct { } foo".
+            if force_name is not None:
+                explicit_name = '$%s' % force_name
+            else:
+                self._anonymous_counter += 1
+                explicit_name = '$%d' % self._anonymous_counter
+            tp = None
+        else:
+            explicit_name = name
+            key = '%s %s' % (kind, name)
+            tp, _ = self._declarations.get(key, (None, None))
+        #
+        if tp is None:
+            if kind == 'struct':
+                tp = model.StructType(explicit_name, None, None, None)
+            elif kind == 'union':
+                tp = model.UnionType(explicit_name, None, None, None)
+            elif kind == 'enum':
+                if explicit_name == '__dotdotdot__':
+                    raise CDefError("Enums cannot be declared with ...")
+                tp = self._build_enum_type(explicit_name, type.values)
+            else:
+                raise AssertionError("kind = %r" % (kind,))
+            if name is not None:
+                self._declare(key, tp)
+        else:
+            if kind == 'enum' and type.values is not None:
+                raise NotImplementedError(
+                    "enum %s: the '{}' declaration should appear on the first "
+                    "time the enum is mentioned, not later" % explicit_name)
+        if not tp.forcename:
+            tp.force_the_name(force_name)
+        if tp.forcename and '$' in tp.name:
+            self._declare('anonymous %s' % tp.forcename, tp)
+        #
+        self._structnode2type[type] = tp
+        #
+        # enums: done here
+        if kind == 'enum':
+            return tp
+        #
+        # is there a 'type.decls'?
If yes, then this is the place in the + # C sources that declare the fields. If no, then just return the + # existing type, possibly still incomplete. + if type.decls is None: + return tp + # + if tp.fldnames is not None: + raise CDefError("duplicate declaration of struct %s" % name) + fldnames = [] + fldtypes = [] + fldbitsize = [] + fldquals = [] + for decl in type.decls: + if (isinstance(decl.type, pycparser.c_ast.IdentifierType) and + ''.join(decl.type.names) == '__dotdotdot__'): + # XXX pycparser is inconsistent: 'names' should be a list + # of strings, but is sometimes just one string. Use + # str.join() as a way to cope with both. + self._make_partial(tp, nested) + continue + if decl.bitsize is None: + bitsize = -1 + else: + bitsize = self._parse_constant(decl.bitsize) + self._partial_length = False + type, fqual = self._get_type_and_quals(decl.type, + partial_length_ok=True) + if self._partial_length: + self._make_partial(tp, nested) + if isinstance(type, model.StructType) and type.partial: + self._make_partial(tp, nested) + fldnames.append(decl.name or '') + fldtypes.append(type) + fldbitsize.append(bitsize) + fldquals.append(fqual) + tp.fldnames = tuple(fldnames) + tp.fldtypes = tuple(fldtypes) + tp.fldbitsize = tuple(fldbitsize) + tp.fldquals = tuple(fldquals) + if fldbitsize != [-1] * len(fldbitsize): + if isinstance(tp, model.StructType) and tp.partial: + raise NotImplementedError("%s: using both bitfields and '...;'" + % (tp,)) + tp.packed = self._options.get('packed') + if tp.completed: # must be re-completed: it is not opaque any more + tp.completed = 0 + self._recomplete.append(tp) + return tp + + def _make_partial(self, tp, nested): + if not isinstance(tp, model.StructOrUnion): + raise CDefError("%s cannot be partial" % (tp,)) + if not tp.has_c_name() and not nested: + raise NotImplementedError("%s is partial but has no C name" %(tp,)) + tp.partial = True + + def _parse_constant(self, exprnode, partial_length_ok=False): + # for now, limited to expressions that are an immediate number + # or positive/negative number + if isinstance(exprnode, pycparser.c_ast.Constant): + s = exprnode.value + if '0' <= s[0] <= '9': + s = s.rstrip('uUlL') + try: + if s.startswith('0'): + return int(s, 8) + else: + return int(s, 10) + except ValueError: + if len(s) > 1: + if s.lower()[0:2] == '0x': + return int(s, 16) + elif s.lower()[0:2] == '0b': + return int(s, 2) + raise CDefError("invalid constant %r" % (s,)) + elif s[0] == "'" and s[-1] == "'" and ( + len(s) == 3 or (len(s) == 4 and s[1] == "\\")): + return ord(s[-2]) + else: + raise CDefError("invalid constant %r" % (s,)) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '+'): + return self._parse_constant(exprnode.expr) + # + if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and + exprnode.op == '-'): + return -self._parse_constant(exprnode.expr) + # load previously defined int constant + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name in self._int_constants): + return self._int_constants[exprnode.name] + # + if (isinstance(exprnode, pycparser.c_ast.ID) and + exprnode.name == '__dotdotdotarray__'): + if partial_length_ok: + self._partial_length = True + return '...' 
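+            # when a partial length is not acceptable in this context,
+            # '[...]' cannot be resolved to a concrete array length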
+ raise FFIError(":%d: unsupported '[...]' here, cannot derive " + "the actual array length in this context" + % exprnode.coord.line) + # + if isinstance(exprnode, pycparser.c_ast.BinaryOp): + left = self._parse_constant(exprnode.left) + right = self._parse_constant(exprnode.right) + if exprnode.op == '+': + return left + right + elif exprnode.op == '-': + return left - right + elif exprnode.op == '*': + return left * right + elif exprnode.op == '/': + return self._c_div(left, right) + elif exprnode.op == '%': + return left - self._c_div(left, right) * right + elif exprnode.op == '<<': + return left << right + elif exprnode.op == '>>': + return left >> right + elif exprnode.op == '&': + return left & right + elif exprnode.op == '|': + return left | right + elif exprnode.op == '^': + return left ^ right + # + raise FFIError(":%d: unsupported expression: expected a " + "simple numeric constant" % exprnode.coord.line) + + def _c_div(self, a, b): + result = a // b + if ((a < 0) ^ (b < 0)) and (a % b) != 0: + result += 1 + return result + + def _build_enum_type(self, explicit_name, decls): + if decls is not None: + partial = False + enumerators = [] + enumvalues = [] + nextenumvalue = 0 + for enum in decls.enumerators: + if _r_enum_dotdotdot.match(enum.name): + partial = True + continue + if enum.value is not None: + nextenumvalue = self._parse_constant(enum.value) + enumerators.append(enum.name) + enumvalues.append(nextenumvalue) + self._add_constants(enum.name, nextenumvalue) + nextenumvalue += 1 + enumerators = tuple(enumerators) + enumvalues = tuple(enumvalues) + tp = model.EnumType(explicit_name, enumerators, enumvalues) + tp.partial = partial + else: # opaque enum + tp = model.EnumType(explicit_name, (), ()) + return tp + + def include(self, other): + for name, (tp, quals) in other._declarations.items(): + if name.startswith('anonymous $enum_$'): + continue # fix for test_anonymous_enum_include + kind = name.split(' ', 1)[0] + if kind in ('struct', 'union', 'enum', 'anonymous', 'typedef'): + self._declare(name, tp, included=True, quals=quals) + for k, v in other._int_constants.items(): + self._add_constants(k, v) + + def _get_unknown_type(self, decl): + typenames = decl.type.type.names + if typenames == ['__dotdotdot__']: + return model.unknown_type(decl.name) + + if typenames == ['__dotdotdotint__']: + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef int... %s'" % decl.name + return model.UnknownIntegerType(decl.name) + + if typenames == ['__dotdotdotfloat__']: + # note: not for 'long double' so far + if self._uses_new_feature is None: + self._uses_new_feature = "'typedef float... %s'" % decl.name + return model.UnknownFloatType(decl.name) + + raise FFIError(':%d: unsupported usage of "..." in typedef' + % decl.coord.line) + + def _get_unknown_ptr_type(self, decl): + if decl.type.type.type.names == ['__dotdotdot__']: + return model.unknown_ptr_type(decl.name) + raise FFIError(':%d: unsupported usage of "..." 
in typedef'
+                       % decl.coord.line)
diff --git a/templates/skills/file_manager/dependencies/cffi/error.py b/templates/skills/file_manager/dependencies/cffi/error.py
new file mode 100644
index 00000000..0a27247c
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/error.py
@@ -0,0 +1,31 @@
+
+class FFIError(Exception):
+    __module__ = 'cffi'
+
+class CDefError(Exception):
+    __module__ = 'cffi'
+    def __str__(self):
+        try:
+            current_decl = self.args[1]
+            filename = current_decl.coord.file
+            linenum = current_decl.coord.line
+            prefix = '%s:%d: ' % (filename, linenum)
+        except (AttributeError, TypeError, IndexError):
+            prefix = ''
+        return '%s%s' % (prefix, self.args[0])
+
+class VerificationError(Exception):
+    """ An error raised when verification fails
+    """
+    __module__ = 'cffi'
+
+class VerificationMissing(Exception):
+    """ An error raised when incomplete structures are passed into
+    cdef, but no verification has been done
+    """
+    __module__ = 'cffi'
+
+class PkgConfigError(Exception):
+    """ An error raised for missing modules in pkg-config
+    """
+    __module__ = 'cffi'
diff --git a/templates/skills/file_manager/dependencies/cffi/ffiplatform.py b/templates/skills/file_manager/dependencies/cffi/ffiplatform.py
new file mode 100644
index 00000000..adca28f1
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/ffiplatform.py
@@ -0,0 +1,113 @@
+import sys, os
+from .error import VerificationError
+
+
+LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs',
+                      'extra_objects', 'depends']
+
+def get_extension(srcfilename, modname, sources=(), **kwds):
+    from cffi._shimmed_dist_utils import Extension
+    allsources = [srcfilename]
+    for src in sources:
+        allsources.append(os.path.normpath(src))
+    return Extension(name=modname, sources=allsources, **kwds)
+
+def compile(tmpdir, ext, compiler_verbose=0, debug=None):
+    """Compile a C extension module using distutils."""
+
+    saved_environ = os.environ.copy()
+    try:
+        outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
+        outputfilename = os.path.abspath(outputfilename)
+    finally:
+        # workaround for a distutils bug where some env vars can
+        # become longer and longer every time it is used
+        for key, value in saved_environ.items():
+            if os.environ.get(key) != value:
+                os.environ[key] = value
+    return outputfilename
+
+def _build(tmpdir, ext, compiler_verbose=0, debug=None):
+    # XXX compact but horrible :-(
+    from cffi._shimmed_dist_utils import Distribution, CompileError, LinkError, set_threshold, set_verbosity
+
+    dist = Distribution({'ext_modules': [ext]})
+    dist.parse_config_files()
+    options = dist.get_option_dict('build_ext')
+    if debug is None:
+        debug = sys.flags.debug
+    options['debug'] = ('ffiplatform', debug)
+    options['force'] = ('ffiplatform', True)
+    options['build_lib'] = ('ffiplatform', tmpdir)
+    options['build_temp'] = ('ffiplatform', tmpdir)
+    #
+    try:
+        old_level = set_threshold(0) or 0
+        try:
+            set_verbosity(compiler_verbose)
+            dist.run_command('build_ext')
+            cmd_obj = dist.get_command_obj('build_ext')
+            [soname] = cmd_obj.get_outputs()
+        finally:
+            set_threshold(old_level)
+    except (CompileError, LinkError) as e:
+        raise VerificationError('%s: %s' % (e.__class__.__name__, e))
+    #
+    return soname
+
+try:
+    from os.path import samefile
+except ImportError:
+    def samefile(f1, f2):
+        return os.path.abspath(f1) == os.path.abspath(f2)
+
+def maybe_relative_path(path):
+    if not os.path.isabs(path):
+        return path      # already relative
+    dir = path
+    names = []
+    while True:
+        prevdir = dir
+        dir, name = 
os.path.split(prevdir) + if dir == prevdir or not dir: + return path # failed to make it relative + names.append(name) + try: + if samefile(dir, os.curdir): + names.reverse() + return os.path.join(*names) + except OSError: + pass + +# ____________________________________________________________ + +try: + int_or_long = (int, long) + import cStringIO +except NameError: + int_or_long = int # Python 3 + import io as cStringIO + +def _flatten(x, f): + if isinstance(x, str): + f.write('%ds%s' % (len(x), x)) + elif isinstance(x, dict): + keys = sorted(x.keys()) + f.write('%dd' % len(keys)) + for key in keys: + _flatten(key, f) + _flatten(x[key], f) + elif isinstance(x, (list, tuple)): + f.write('%dl' % len(x)) + for value in x: + _flatten(value, f) + elif isinstance(x, int_or_long): + f.write('%di' % (x,)) + else: + raise TypeError( + "the keywords to verify() contains unsupported object %r" % (x,)) + +def flatten(x): + f = cStringIO.StringIO() + _flatten(x, f) + return f.getvalue() diff --git a/templates/skills/file_manager/dependencies/cffi/lock.py b/templates/skills/file_manager/dependencies/cffi/lock.py new file mode 100644 index 00000000..db91b715 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/lock.py @@ -0,0 +1,30 @@ +import sys + +if sys.version_info < (3,): + try: + from thread import allocate_lock + except ImportError: + from dummy_thread import allocate_lock +else: + try: + from _thread import allocate_lock + except ImportError: + from _dummy_thread import allocate_lock + + +##import sys +##l1 = allocate_lock + +##class allocate_lock(object): +## def __init__(self): +## self._real = l1() +## def __enter__(self): +## for i in range(4, 0, -1): +## print sys._getframe(i).f_code +## print +## return self._real.__enter__() +## def __exit__(self, *args): +## return self._real.__exit__(*args) +## def acquire(self, f): +## assert f is False +## return self._real.acquire(f) diff --git a/templates/skills/file_manager/dependencies/cffi/model.py b/templates/skills/file_manager/dependencies/cffi/model.py new file mode 100644 index 00000000..e5f4cae3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/model.py @@ -0,0 +1,618 @@ +import types +import weakref + +from .lock import allocate_lock +from .error import CDefError, VerificationError, VerificationMissing + +# type qualifiers +Q_CONST = 0x01 +Q_RESTRICT = 0x02 +Q_VOLATILE = 0x04 + +def qualify(quals, replace_with): + if quals & Q_CONST: + replace_with = ' const ' + replace_with.lstrip() + if quals & Q_VOLATILE: + replace_with = ' volatile ' + replace_with.lstrip() + if quals & Q_RESTRICT: + # It seems that __restrict is supported by gcc and msvc. + # If you hit some different compiler, add a #define in + # _cffi_include.h for it (and in its copies, documented there) + replace_with = ' __restrict ' + replace_with.lstrip() + return replace_with + + +class BaseTypeByIdentity(object): + is_array_type = False + is_raw_function = False + + def get_c_name(self, replace_with='', context='a C file', quals=0): + result = self.c_name_with_marker + assert result.count('&') == 1 + # some logic duplication with ffi.getctype()... 
:-( + replace_with = replace_with.strip() + if replace_with: + if replace_with.startswith('*') and '&[' in result: + replace_with = '(%s)' % replace_with + elif not replace_with[0] in '[(': + replace_with = ' ' + replace_with + replace_with = qualify(quals, replace_with) + result = result.replace('&', replace_with) + if '$' in result: + raise VerificationError( + "cannot generate '%s' in %s: unknown type name" + % (self._get_c_name(), context)) + return result + + def _get_c_name(self): + return self.c_name_with_marker.replace('&', '') + + def has_c_name(self): + return '$' not in self._get_c_name() + + def is_integer_type(self): + return False + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + try: + BType = ffi._cached_btypes[self] + except KeyError: + BType = self.build_backend_type(ffi, finishlist) + BType2 = ffi._cached_btypes.setdefault(self, BType) + assert BType2 is BType + return BType + + def __repr__(self): + return '<%s>' % (self._get_c_name(),) + + def _get_items(self): + return [(name, getattr(self, name)) for name in self._attrs_] + + +class BaseType(BaseTypeByIdentity): + + def __eq__(self, other): + return (self.__class__ == other.__class__ and + self._get_items() == other._get_items()) + + def __ne__(self, other): + return not self == other + + def __hash__(self): + return hash((self.__class__, tuple(self._get_items()))) + + +class VoidType(BaseType): + _attrs_ = () + + def __init__(self): + self.c_name_with_marker = 'void&' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_void_type') + +void_type = VoidType() + + +class BasePrimitiveType(BaseType): + def is_complex_type(self): + return False + + +class PrimitiveType(BasePrimitiveType): + _attrs_ = ('name',) + + ALL_PRIMITIVE_TYPES = { + 'char': 'c', + 'short': 'i', + 'int': 'i', + 'long': 'i', + 'long long': 'i', + 'signed char': 'i', + 'unsigned char': 'i', + 'unsigned short': 'i', + 'unsigned int': 'i', + 'unsigned long': 'i', + 'unsigned long long': 'i', + 'float': 'f', + 'double': 'f', + 'long double': 'f', + '_cffi_float_complex_t': 'j', + '_cffi_double_complex_t': 'j', + '_Bool': 'i', + # the following types are not primitive in the C sense + 'wchar_t': 'c', + 'char16_t': 'c', + 'char32_t': 'c', + 'int8_t': 'i', + 'uint8_t': 'i', + 'int16_t': 'i', + 'uint16_t': 'i', + 'int32_t': 'i', + 'uint32_t': 'i', + 'int64_t': 'i', + 'uint64_t': 'i', + 'int_least8_t': 'i', + 'uint_least8_t': 'i', + 'int_least16_t': 'i', + 'uint_least16_t': 'i', + 'int_least32_t': 'i', + 'uint_least32_t': 'i', + 'int_least64_t': 'i', + 'uint_least64_t': 'i', + 'int_fast8_t': 'i', + 'uint_fast8_t': 'i', + 'int_fast16_t': 'i', + 'uint_fast16_t': 'i', + 'int_fast32_t': 'i', + 'uint_fast32_t': 'i', + 'int_fast64_t': 'i', + 'uint_fast64_t': 'i', + 'intptr_t': 'i', + 'uintptr_t': 'i', + 'intmax_t': 'i', + 'uintmax_t': 'i', + 'ptrdiff_t': 'i', + 'size_t': 'i', + 'ssize_t': 'i', + } + + def __init__(self, name): + assert name in self.ALL_PRIMITIVE_TYPES + self.name = name + self.c_name_with_marker = name + '&' + + def is_char_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'c' + def is_integer_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'i' + def is_float_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'f' + def is_complex_type(self): + return self.ALL_PRIMITIVE_TYPES[self.name] == 'j' + + def build_backend_type(self, ffi, finishlist): + return global_cache(self, ffi, 'new_primitive_type', self.name) + + +class UnknownIntegerType(BasePrimitiveType): + 
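+    # placeholder for 'typedef int... foo;'-style declarations: the actual
+    # size and signedness are only known after compilation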
_attrs_ = ('name',) + + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def is_integer_type(self): + return True + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("integer type '%s' can only be used after " + "compilation" % self.name) + +class UnknownFloatType(BasePrimitiveType): + _attrs_ = ('name', ) + + def __init__(self, name): + self.name = name + self.c_name_with_marker = name + '&' + + def build_backend_type(self, ffi, finishlist): + raise NotImplementedError("float type '%s' can only be used after " + "compilation" % self.name) + + +class BaseFunctionType(BaseType): + _attrs_ = ('args', 'result', 'ellipsis', 'abi') + + def __init__(self, args, result, ellipsis, abi=None): + self.args = args + self.result = result + self.ellipsis = ellipsis + self.abi = abi + # + reprargs = [arg._get_c_name() for arg in self.args] + if self.ellipsis: + reprargs.append('...') + reprargs = reprargs or ['void'] + replace_with = self._base_pattern % (', '.join(reprargs),) + if abi is not None: + replace_with = replace_with[:1] + abi + ' ' + replace_with[1:] + self.c_name_with_marker = ( + self.result.c_name_with_marker.replace('&', replace_with)) + + +class RawFunctionType(BaseFunctionType): + # Corresponds to a C type like 'int(int)', which is the C type of + # a function, but not a pointer-to-function. The backend has no + # notion of such a type; it's used temporarily by parsing. + _base_pattern = '(&)(%s)' + is_raw_function = True + + def build_backend_type(self, ffi, finishlist): + raise CDefError("cannot render the type %r: it is a function " + "type, not a pointer-to-function type" % (self,)) + + def as_function_pointer(self): + return FunctionPtrType(self.args, self.result, self.ellipsis, self.abi) + + +class FunctionPtrType(BaseFunctionType): + _base_pattern = '(*&)(%s)' + + def build_backend_type(self, ffi, finishlist): + result = self.result.get_cached_btype(ffi, finishlist) + args = [] + for tp in self.args: + args.append(tp.get_cached_btype(ffi, finishlist)) + abi_args = () + if self.abi == "__stdcall": + if not self.ellipsis: # __stdcall ignored for variadic funcs + try: + abi_args = (ffi._backend.FFI_STDCALL,) + except AttributeError: + pass + return global_cache(self, ffi, 'new_function_type', + tuple(args), result, self.ellipsis, *abi_args) + + def as_raw_function(self): + return RawFunctionType(self.args, self.result, self.ellipsis, self.abi) + + +class PointerType(BaseType): + _attrs_ = ('totype', 'quals') + + def __init__(self, totype, quals=0): + self.totype = totype + self.quals = quals + extra = " *&" + if totype.is_array_type: + extra = "(%s)" % (extra.lstrip(),) + extra = qualify(quals, extra) + self.c_name_with_marker = totype.c_name_with_marker.replace('&', extra) + + def build_backend_type(self, ffi, finishlist): + BItem = self.totype.get_cached_btype(ffi, finishlist, can_delay=True) + return global_cache(self, ffi, 'new_pointer_type', BItem) + +voidp_type = PointerType(void_type) + +def ConstPointerType(totype): + return PointerType(totype, Q_CONST) + +const_voidp_type = ConstPointerType(void_type) + + +class NamedPointerType(PointerType): + _attrs_ = ('totype', 'name') + + def __init__(self, totype, name, quals=0): + PointerType.__init__(self, totype, quals) + self.name = name + self.c_name_with_marker = name + '&' + + +class ArrayType(BaseType): + _attrs_ = ('item', 'length') + is_array_type = True + + def __init__(self, item, length): + self.item = item + self.length = length + # + if length is None: + 
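+            # no length given, e.g. 'int x[];'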
brackets = '&[]' + elif length == '...': + brackets = '&[/*...*/]' + else: + brackets = '&[%s]' % length + self.c_name_with_marker = ( + self.item.c_name_with_marker.replace('&', brackets)) + + def length_is_unknown(self): + return isinstance(self.length, str) + + def resolve_length(self, newlength): + return ArrayType(self.item, newlength) + + def build_backend_type(self, ffi, finishlist): + if self.length_is_unknown(): + raise CDefError("cannot render the type %r: unknown length" % + (self,)) + self.item.get_cached_btype(ffi, finishlist) # force the item BType + BPtrItem = PointerType(self.item).get_cached_btype(ffi, finishlist) + return global_cache(self, ffi, 'new_array_type', BPtrItem, self.length) + +char_array_type = ArrayType(PrimitiveType('char'), None) + + +class StructOrUnionOrEnum(BaseTypeByIdentity): + _attrs_ = ('name',) + forcename = None + + def build_c_name_with_marker(self): + name = self.forcename or '%s %s' % (self.kind, self.name) + self.c_name_with_marker = name + '&' + + def force_the_name(self, forcename): + self.forcename = forcename + self.build_c_name_with_marker() + + def get_official_name(self): + assert self.c_name_with_marker.endswith('&') + return self.c_name_with_marker[:-1] + + +class StructOrUnion(StructOrUnionOrEnum): + fixedlayout = None + completed = 0 + partial = False + packed = 0 + + def __init__(self, name, fldnames, fldtypes, fldbitsize, fldquals=None): + self.name = name + self.fldnames = fldnames + self.fldtypes = fldtypes + self.fldbitsize = fldbitsize + self.fldquals = fldquals + self.build_c_name_with_marker() + + def anonymous_struct_fields(self): + if self.fldtypes is not None: + for name, type in zip(self.fldnames, self.fldtypes): + if name == '' and isinstance(type, StructOrUnion): + yield type + + def enumfields(self, expand_anonymous_struct_union=True): + fldquals = self.fldquals + if fldquals is None: + fldquals = (0,) * len(self.fldnames) + for name, type, bitsize, quals in zip(self.fldnames, self.fldtypes, + self.fldbitsize, fldquals): + if (name == '' and isinstance(type, StructOrUnion) + and expand_anonymous_struct_union): + # nested anonymous struct/union + for result in type.enumfields(): + yield result + else: + yield (name, type, bitsize, quals) + + def force_flatten(self): + # force the struct or union to have a declaration that lists + # directly all fields returned by enumfields(), flattening + # nested anonymous structs/unions. 
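+        # e.g. a 'struct { int a; struct { int b; }; }' is flattened as if
+        # it had been declared 'struct { int a; int b; }'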
+ names = [] + types = [] + bitsizes = [] + fldquals = [] + for name, type, bitsize, quals in self.enumfields(): + names.append(name) + types.append(type) + bitsizes.append(bitsize) + fldquals.append(quals) + self.fldnames = tuple(names) + self.fldtypes = tuple(types) + self.fldbitsize = tuple(bitsizes) + self.fldquals = tuple(fldquals) + + def get_cached_btype(self, ffi, finishlist, can_delay=False): + BType = StructOrUnionOrEnum.get_cached_btype(self, ffi, finishlist, + can_delay) + if not can_delay: + self.finish_backend_type(ffi, finishlist) + return BType + + def finish_backend_type(self, ffi, finishlist): + if self.completed: + if self.completed != 2: + raise NotImplementedError("recursive structure declaration " + "for '%s'" % (self.name,)) + return + BType = ffi._cached_btypes[self] + # + self.completed = 1 + # + if self.fldtypes is None: + pass # not completing it: it's an opaque struct + # + elif self.fixedlayout is None: + fldtypes = [tp.get_cached_btype(ffi, finishlist) + for tp in self.fldtypes] + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize)) + extra_flags = () + if self.packed: + if self.packed == 1: + extra_flags = (8,) # SF_PACKED + else: + extra_flags = (0, self.packed) + ffi._backend.complete_struct_or_union(BType, lst, self, + -1, -1, *extra_flags) + # + else: + fldtypes = [] + fieldofs, fieldsize, totalsize, totalalignment = self.fixedlayout + for i in range(len(self.fldnames)): + fsize = fieldsize[i] + ftype = self.fldtypes[i] + # + if isinstance(ftype, ArrayType) and ftype.length_is_unknown(): + # fix the length to match the total size + BItemType = ftype.item.get_cached_btype(ffi, finishlist) + nlen, nrest = divmod(fsize, ffi.sizeof(BItemType)) + if nrest != 0: + self._verification_error( + "field '%s.%s' has a bogus size?" 
% ( + self.name, self.fldnames[i] or '{}')) + ftype = ftype.resolve_length(nlen) + self.fldtypes = (self.fldtypes[:i] + (ftype,) + + self.fldtypes[i+1:]) + # + BFieldType = ftype.get_cached_btype(ffi, finishlist) + if isinstance(ftype, ArrayType) and ftype.length is None: + assert fsize == 0 + else: + bitemsize = ffi.sizeof(BFieldType) + if bitemsize != fsize: + self._verification_error( + "field '%s.%s' is declared as %d bytes, but is " + "really %d bytes" % (self.name, + self.fldnames[i] or '{}', + bitemsize, fsize)) + fldtypes.append(BFieldType) + # + lst = list(zip(self.fldnames, fldtypes, self.fldbitsize, fieldofs)) + ffi._backend.complete_struct_or_union(BType, lst, self, + totalsize, totalalignment) + self.completed = 2 + + def _verification_error(self, msg): + raise VerificationError(msg) + + def check_not_partial(self): + if self.partial and self.fixedlayout is None: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + finishlist.append(self) + # + return global_cache(self, ffi, 'new_%s_type' % self.kind, + self.get_official_name(), key=self) + + +class StructType(StructOrUnion): + kind = 'struct' + + +class UnionType(StructOrUnion): + kind = 'union' + + +class EnumType(StructOrUnionOrEnum): + kind = 'enum' + partial = False + partial_resolved = False + + def __init__(self, name, enumerators, enumvalues, baseinttype=None): + self.name = name + self.enumerators = enumerators + self.enumvalues = enumvalues + self.baseinttype = baseinttype + self.build_c_name_with_marker() + + def force_the_name(self, forcename): + StructOrUnionOrEnum.force_the_name(self, forcename) + if self.forcename is None: + name = self.get_official_name() + self.forcename = '$' + name.replace(' ', '_') + + def check_not_partial(self): + if self.partial and not self.partial_resolved: + raise VerificationMissing(self._get_c_name()) + + def build_backend_type(self, ffi, finishlist): + self.check_not_partial() + base_btype = self.build_baseinttype(ffi, finishlist) + return global_cache(self, ffi, 'new_enum_type', + self.get_official_name(), + self.enumerators, self.enumvalues, + base_btype, key=self) + + def build_baseinttype(self, ffi, finishlist): + if self.baseinttype is not None: + return self.baseinttype.get_cached_btype(ffi, finishlist) + # + if self.enumvalues: + smallest_value = min(self.enumvalues) + largest_value = max(self.enumvalues) + else: + import warnings + try: + # XXX! The goal is to ensure that the warnings.warn() + # will not suppress the warning. We want to get it + # several times if we reach this point several times. 
+ __warningregistry__.clear() + except NameError: + pass + warnings.warn("%r has no values explicitly defined; " + "guessing that it is equivalent to 'unsigned int'" + % self._get_c_name()) + smallest_value = largest_value = 0 + if smallest_value < 0: # needs a signed type + sign = 1 + candidate1 = PrimitiveType("int") + candidate2 = PrimitiveType("long") + else: + sign = 0 + candidate1 = PrimitiveType("unsigned int") + candidate2 = PrimitiveType("unsigned long") + btype1 = candidate1.get_cached_btype(ffi, finishlist) + btype2 = candidate2.get_cached_btype(ffi, finishlist) + size1 = ffi.sizeof(btype1) + size2 = ffi.sizeof(btype2) + if (smallest_value >= ((-1) << (8*size1-1)) and + largest_value < (1 << (8*size1-sign))): + return btype1 + if (smallest_value >= ((-1) << (8*size2-1)) and + largest_value < (1 << (8*size2-sign))): + return btype2 + raise CDefError("%s values don't all fit into either 'long' " + "or 'unsigned long'" % self._get_c_name()) + +def unknown_type(name, structname=None): + if structname is None: + structname = '$%s' % name + tp = StructType(structname, None, None, None) + tp.force_the_name(name) + tp.origin = "unknown_type" + return tp + +def unknown_ptr_type(name, structname=None): + if structname is None: + structname = '$$%s' % name + tp = StructType(structname, None, None, None) + return NamedPointerType(tp, name) + + +global_lock = allocate_lock() +_typecache_cffi_backend = weakref.WeakValueDictionary() + +def get_typecache(backend): + # returns _typecache_cffi_backend if backend is the _cffi_backend + # module, or type(backend).__typecache if backend is an instance of + # CTypesBackend (or some FakeBackend class during tests) + if isinstance(backend, types.ModuleType): + return _typecache_cffi_backend + with global_lock: + if not hasattr(type(backend), '__typecache'): + type(backend).__typecache = weakref.WeakValueDictionary() + return type(backend).__typecache + +def global_cache(srctype, ffi, funcname, *args, **kwds): + key = kwds.pop('key', (funcname, args)) + assert not kwds + try: + return ffi._typecache[key] + except KeyError: + pass + try: + res = getattr(ffi._backend, funcname)(*args) + except NotImplementedError as e: + raise NotImplementedError("%s: %r: %s" % (funcname, srctype, e)) + # note that setdefault() on WeakValueDictionary is not atomic + # and contains a rare bug (http://bugs.python.org/issue19542); + # we have to use a lock and do it ourselves + cache = ffi._typecache + with global_lock: + res1 = cache.get(key) + if res1 is None: + cache[key] = res + return res + else: + return res1 + +def pointer_cache(ffi, BType): + return global_cache('?', ffi, 'new_pointer_type', BType) + +def attach_exception_info(e, name): + if e.args and type(e.args[0]) is str: + e.args = ('%s: %s' % (name, e.args[0]),) + e.args[1:] diff --git a/templates/skills/file_manager/dependencies/cffi/parse_c_type.h b/templates/skills/file_manager/dependencies/cffi/parse_c_type.h new file mode 100644 index 00000000..84e4ef85 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/parse_c_type.h @@ -0,0 +1,181 @@ + +/* This part is from file 'cffi/parse_c_type.h'. It is copied at the + beginning of C sources generated by CFFI's ffi.set_source(). 
*/ + +typedef void *_cffi_opcode_t; + +#define _CFFI_OP(opcode, arg) (_cffi_opcode_t)(opcode | (((uintptr_t)(arg)) << 8)) +#define _CFFI_GETOP(cffi_opcode) ((unsigned char)(uintptr_t)cffi_opcode) +#define _CFFI_GETARG(cffi_opcode) (((intptr_t)cffi_opcode) >> 8) + +#define _CFFI_OP_PRIMITIVE 1 +#define _CFFI_OP_POINTER 3 +#define _CFFI_OP_ARRAY 5 +#define _CFFI_OP_OPEN_ARRAY 7 +#define _CFFI_OP_STRUCT_UNION 9 +#define _CFFI_OP_ENUM 11 +#define _CFFI_OP_FUNCTION 13 +#define _CFFI_OP_FUNCTION_END 15 +#define _CFFI_OP_NOOP 17 +#define _CFFI_OP_BITFIELD 19 +#define _CFFI_OP_TYPENAME 21 +#define _CFFI_OP_CPYTHON_BLTN_V 23 // varargs +#define _CFFI_OP_CPYTHON_BLTN_N 25 // noargs +#define _CFFI_OP_CPYTHON_BLTN_O 27 // O (i.e. a single arg) +#define _CFFI_OP_CONSTANT 29 +#define _CFFI_OP_CONSTANT_INT 31 +#define _CFFI_OP_GLOBAL_VAR 33 +#define _CFFI_OP_DLOPEN_FUNC 35 +#define _CFFI_OP_DLOPEN_CONST 37 +#define _CFFI_OP_GLOBAL_VAR_F 39 +#define _CFFI_OP_EXTERN_PYTHON 41 + +#define _CFFI_PRIM_VOID 0 +#define _CFFI_PRIM_BOOL 1 +#define _CFFI_PRIM_CHAR 2 +#define _CFFI_PRIM_SCHAR 3 +#define _CFFI_PRIM_UCHAR 4 +#define _CFFI_PRIM_SHORT 5 +#define _CFFI_PRIM_USHORT 6 +#define _CFFI_PRIM_INT 7 +#define _CFFI_PRIM_UINT 8 +#define _CFFI_PRIM_LONG 9 +#define _CFFI_PRIM_ULONG 10 +#define _CFFI_PRIM_LONGLONG 11 +#define _CFFI_PRIM_ULONGLONG 12 +#define _CFFI_PRIM_FLOAT 13 +#define _CFFI_PRIM_DOUBLE 14 +#define _CFFI_PRIM_LONGDOUBLE 15 + +#define _CFFI_PRIM_WCHAR 16 +#define _CFFI_PRIM_INT8 17 +#define _CFFI_PRIM_UINT8 18 +#define _CFFI_PRIM_INT16 19 +#define _CFFI_PRIM_UINT16 20 +#define _CFFI_PRIM_INT32 21 +#define _CFFI_PRIM_UINT32 22 +#define _CFFI_PRIM_INT64 23 +#define _CFFI_PRIM_UINT64 24 +#define _CFFI_PRIM_INTPTR 25 +#define _CFFI_PRIM_UINTPTR 26 +#define _CFFI_PRIM_PTRDIFF 27 +#define _CFFI_PRIM_SIZE 28 +#define _CFFI_PRIM_SSIZE 29 +#define _CFFI_PRIM_INT_LEAST8 30 +#define _CFFI_PRIM_UINT_LEAST8 31 +#define _CFFI_PRIM_INT_LEAST16 32 +#define _CFFI_PRIM_UINT_LEAST16 33 +#define _CFFI_PRIM_INT_LEAST32 34 +#define _CFFI_PRIM_UINT_LEAST32 35 +#define _CFFI_PRIM_INT_LEAST64 36 +#define _CFFI_PRIM_UINT_LEAST64 37 +#define _CFFI_PRIM_INT_FAST8 38 +#define _CFFI_PRIM_UINT_FAST8 39 +#define _CFFI_PRIM_INT_FAST16 40 +#define _CFFI_PRIM_UINT_FAST16 41 +#define _CFFI_PRIM_INT_FAST32 42 +#define _CFFI_PRIM_UINT_FAST32 43 +#define _CFFI_PRIM_INT_FAST64 44 +#define _CFFI_PRIM_UINT_FAST64 45 +#define _CFFI_PRIM_INTMAX 46 +#define _CFFI_PRIM_UINTMAX 47 +#define _CFFI_PRIM_FLOATCOMPLEX 48 +#define _CFFI_PRIM_DOUBLECOMPLEX 49 +#define _CFFI_PRIM_CHAR16 50 +#define _CFFI_PRIM_CHAR32 51 + +#define _CFFI__NUM_PRIM 52 +#define _CFFI__UNKNOWN_PRIM (-1) +#define _CFFI__UNKNOWN_FLOAT_PRIM (-2) +#define _CFFI__UNKNOWN_LONG_DOUBLE (-3) + +#define _CFFI__IO_FILE_STRUCT (-1) + + +struct _cffi_global_s { + const char *name; + void *address; + _cffi_opcode_t type_op; + void *size_or_direct_fn; // OP_GLOBAL_VAR: size, or 0 if unknown + // OP_CPYTHON_BLTN_*: addr of direct function +}; + +struct _cffi_getconst_s { + unsigned long long value; + const struct _cffi_type_context_s *ctx; + int gindex; +}; + +struct _cffi_struct_union_s { + const char *name; + int type_index; // -> _cffi_types, on a OP_STRUCT_UNION + int flags; // _CFFI_F_* flags below + size_t size; + int alignment; + int first_field_index; // -> _cffi_fields array + int num_fields; +}; +#define _CFFI_F_UNION 0x01 // is a union, not a struct +#define _CFFI_F_CHECK_FIELDS 0x02 // complain if fields are not in the + // "standard layout" or if some are missing +#define 
_CFFI_F_PACKED      0x04   // for CHECK_FIELDS, assume a packed struct
+#define _CFFI_F_EXTERNAL    0x08   // in some other ffi.include()
+#define _CFFI_F_OPAQUE      0x10   // opaque
+
+struct _cffi_field_s {
+    const char *name;
+    size_t field_offset;
+    size_t field_size;
+    _cffi_opcode_t field_type_op;
+};
+
+struct _cffi_enum_s {
+    const char *name;
+    int type_index;          // -> _cffi_types, on a OP_ENUM
+    int type_prim;           // _CFFI_PRIM_xxx
+    const char *enumerators; // comma-delimited string
+};
+
+struct _cffi_typename_s {
+    const char *name;
+    int type_index;   /* if opaque, points to a possibly artificial
+                         OP_STRUCT which is itself opaque */
+};
+
+struct _cffi_type_context_s {
+    _cffi_opcode_t *types;
+    const struct _cffi_global_s *globals;
+    const struct _cffi_field_s *fields;
+    const struct _cffi_struct_union_s *struct_unions;
+    const struct _cffi_enum_s *enums;
+    const struct _cffi_typename_s *typenames;
+    int num_globals;
+    int num_struct_unions;
+    int num_enums;
+    int num_typenames;
+    const char *const *includes;
+    int num_types;
+    int flags;      /* future extension */
+};
+
+struct _cffi_parse_info_s {
+    const struct _cffi_type_context_s *ctx;
+    _cffi_opcode_t *output;
+    unsigned int output_size;
+    size_t error_location;
+    const char *error_message;
+};
+
+struct _cffi_externpy_s {
+    const char *name;
+    size_t size_of_result;
+    void *reserved1, *reserved2;
+};
+
+#ifdef _CFFI_INTERNAL
+static int parse_c_type(struct _cffi_parse_info_s *info, const char *input);
+static int search_in_globals(const struct _cffi_type_context_s *ctx,
+                             const char *search, size_t search_len);
+static int search_in_struct_unions(const struct _cffi_type_context_s *ctx,
+                                   const char *search, size_t search_len);
+#endif
diff --git a/templates/skills/file_manager/dependencies/cffi/pkgconfig.py b/templates/skills/file_manager/dependencies/cffi/pkgconfig.py
new file mode 100644
index 00000000..5c93f15a
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/pkgconfig.py
@@ -0,0 +1,121 @@
+# pkg-config, https://www.freedesktop.org/wiki/Software/pkg-config/ integration for cffi
+import sys, os, subprocess
+
+from .error import PkgConfigError
+
+
+def merge_flags(cfg1, cfg2):
+    """Merge values from cffi config flags cfg2 into cfg1
+
+    Example:
+        merge_flags({"libraries": ["one"]}, {"libraries": ["two"]})
+        {"libraries": ["one", "two"]}
+    """
+    for key, value in cfg2.items():
+        if key not in cfg1:
+            cfg1[key] = value
+        else:
+            if not isinstance(cfg1[key], list):
+                raise TypeError("cfg1[%r] should be a list of strings" % (key,))
+            if not isinstance(value, list):
+                raise TypeError("cfg2[%r] should be a list of strings" % (key,))
+            cfg1[key].extend(value)
+    return cfg1
+
+
+def call(libname, flag, encoding=sys.getfilesystemencoding()):
+    """Calls pkg-config and returns the output if found
+    """
+    a = ["pkg-config", "--print-errors"]
+    a.append(flag)
+    a.append(libname)
+    try:
+        pc = subprocess.Popen(a, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    except EnvironmentError as e:
+        raise PkgConfigError("cannot run pkg-config: %s" % (str(e).strip(),))
+
+    bout, berr = pc.communicate()
+    if pc.returncode != 0:
+        try:
+            berr = berr.decode(encoding)
+        except Exception:
+            pass
+        raise PkgConfigError(berr.strip())
+
+    if sys.version_info >= (3,) and not isinstance(bout, str):   # Python 3.x
+        try:
+            bout = bout.decode(encoding)
+        except UnicodeDecodeError:
+            raise PkgConfigError("pkg-config %s %s returned bytes that cannot "
+                                 "be decoded with encoding %r:\n%r" %
+                                 (flag, libname, encoding, bout))
+
+    if os.altsep != '\\' and '\\' in bout:
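+        # os.altsep is None on non-Windows platforms, so a backslash in the
+        # output most likely means backslash-escaping, which is unsupported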
+ raise PkgConfigError("pkg-config %s %s returned an unsupported " + "backslash-escaped output:\n%r" % + (flag, libname, bout)) + return bout + + +def flags_from_pkgconfig(libs): + r"""Return compiler line flags for FFI.set_source based on pkg-config output + + Usage + ... + ffibuilder.set_source("_foo", pkgconfig = ["libfoo", "libbar >= 1.8.3"]) + + If pkg-config is installed on build machine, then arguments include_dirs, + library_dirs, libraries, define_macros, extra_compile_args and + extra_link_args are extended with an output of pkg-config for libfoo and + libbar. + + Raises PkgConfigError in case the pkg-config call fails. + """ + + def get_include_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-I")] + + def get_library_dirs(string): + return [x[2:] for x in string.split() if x.startswith("-L")] + + def get_libraries(string): + return [x[2:] for x in string.split() if x.startswith("-l")] + + # convert -Dfoo=bar to list of tuples [("foo", "bar")] expected by distutils + def get_macros(string): + def _macro(x): + x = x[2:] # drop "-D" + if '=' in x: + return tuple(x.split("=", 1)) # "-Dfoo=bar" => ("foo", "bar") + else: + return (x, None) # "-Dfoo" => ("foo", None) + return [_macro(x) for x in string.split() if x.startswith("-D")] + + def get_other_cflags(string): + return [x for x in string.split() if not x.startswith("-I") and + not x.startswith("-D")] + + def get_other_libs(string): + return [x for x in string.split() if not x.startswith("-L") and + not x.startswith("-l")] + + # return kwargs for given libname + def kwargs(libname): + fse = sys.getfilesystemencoding() + all_cflags = call(libname, "--cflags") + all_libs = call(libname, "--libs") + return { + "include_dirs": get_include_dirs(all_cflags), + "library_dirs": get_library_dirs(all_libs), + "libraries": get_libraries(all_libs), + "define_macros": get_macros(all_cflags), + "extra_compile_args": get_other_cflags(all_cflags), + "extra_link_args": get_other_libs(all_libs), + } + + # merge all arguments together + ret = {} + for libname in libs: + lib_flags = kwargs(libname) + merge_flags(ret, lib_flags) + return ret diff --git a/templates/skills/file_manager/dependencies/cffi/recompiler.py b/templates/skills/file_manager/dependencies/cffi/recompiler.py new file mode 100644 index 00000000..57781a3c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/recompiler.py @@ -0,0 +1,1598 @@ +import os, sys, io +from . 
import ffiplatform, model +from .error import VerificationError +from .cffi_opcode import * + +VERSION_BASE = 0x2601 +VERSION_EMBEDDED = 0x2701 +VERSION_CHAR16CHAR32 = 0x2801 + +USE_LIMITED_API = (sys.platform != 'win32' or sys.version_info < (3, 0) or + sys.version_info >= (3, 5)) + + +class GlobalExpr: + def __init__(self, name, address, type_op, size=0, check_value=0): + self.name = name + self.address = address + self.type_op = type_op + self.size = size + self.check_value = check_value + + def as_c_expr(self): + return ' { "%s", (void *)%s, %s, (void *)%s },' % ( + self.name, self.address, self.type_op.as_c_expr(), self.size) + + def as_python_expr(self): + return "b'%s%s',%d" % (self.type_op.as_python_bytes(), self.name, + self.check_value) + +class FieldExpr: + def __init__(self, name, field_offset, field_size, fbitsize, field_type_op): + self.name = name + self.field_offset = field_offset + self.field_size = field_size + self.fbitsize = fbitsize + self.field_type_op = field_type_op + + def as_c_expr(self): + spaces = " " * len(self.name) + return (' { "%s", %s,\n' % (self.name, self.field_offset) + + ' %s %s,\n' % (spaces, self.field_size) + + ' %s %s },' % (spaces, self.field_type_op.as_c_expr())) + + def as_python_expr(self): + raise NotImplementedError + + def as_field_python_expr(self): + if self.field_type_op.op == OP_NOOP: + size_expr = '' + elif self.field_type_op.op == OP_BITFIELD: + size_expr = format_four_bytes(self.fbitsize) + else: + raise NotImplementedError + return "b'%s%s%s'" % (self.field_type_op.as_python_bytes(), + size_expr, + self.name) + +class StructUnionExpr: + def __init__(self, name, type_index, flags, size, alignment, comment, + first_field_index, c_fields): + self.name = name + self.type_index = type_index + self.flags = flags + self.size = size + self.alignment = alignment + self.comment = comment + self.first_field_index = first_field_index + self.c_fields = c_fields + + def as_c_expr(self): + return (' { "%s", %d, %s,' % (self.name, self.type_index, self.flags) + + '\n %s, %s, ' % (self.size, self.alignment) + + '%d, %d ' % (self.first_field_index, len(self.c_fields)) + + ('/* %s */ ' % self.comment if self.comment else '') + + '},') + + def as_python_expr(self): + flags = eval(self.flags, G_FLAGS) + fields_expr = [c_field.as_field_python_expr() + for c_field in self.c_fields] + return "(b'%s%s%s',%s)" % ( + format_four_bytes(self.type_index), + format_four_bytes(flags), + self.name, + ','.join(fields_expr)) + +class EnumExpr: + def __init__(self, name, type_index, size, signed, allenums): + self.name = name + self.type_index = type_index + self.size = size + self.signed = signed + self.allenums = allenums + + def as_c_expr(self): + return (' { "%s", %d, _cffi_prim_int(%s, %s),\n' + ' "%s" },' % (self.name, self.type_index, + self.size, self.signed, self.allenums)) + + def as_python_expr(self): + prim_index = { + (1, 0): PRIM_UINT8, (1, 1): PRIM_INT8, + (2, 0): PRIM_UINT16, (2, 1): PRIM_INT16, + (4, 0): PRIM_UINT32, (4, 1): PRIM_INT32, + (8, 0): PRIM_UINT64, (8, 1): PRIM_INT64, + }[self.size, self.signed] + return "b'%s%s%s\\x00%s'" % (format_four_bytes(self.type_index), + format_four_bytes(prim_index), + self.name, self.allenums) + +class TypenameExpr: + def __init__(self, name, type_index): + self.name = name + self.type_index = type_index + + def as_c_expr(self): + return ' { "%s", %d },' % (self.name, self.type_index) + + def as_python_expr(self): + return "b'%s%s'" % (format_four_bytes(self.type_index), self.name) + + +# 
____________________________________________________________ + + +class Recompiler: + _num_externpy = 0 + + def __init__(self, ffi, module_name, target_is_python=False): + self.ffi = ffi + self.module_name = module_name + self.target_is_python = target_is_python + self._version = VERSION_BASE + + def needs_version(self, ver): + self._version = max(self._version, ver) + + def collect_type_table(self): + self._typesdict = {} + self._generate("collecttype") + # + all_decls = sorted(self._typesdict, key=str) + # + # prepare all FUNCTION bytecode sequences first + self.cffi_types = [] + for tp in all_decls: + if tp.is_raw_function: + assert self._typesdict[tp] is None + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + for tp1 in tp.args: + assert isinstance(tp1, (model.VoidType, + model.BasePrimitiveType, + model.PointerType, + model.StructOrUnionOrEnum, + model.FunctionPtrType)) + if self._typesdict[tp1] is None: + self._typesdict[tp1] = len(self.cffi_types) + self.cffi_types.append(tp1) # placeholder + self.cffi_types.append('END') # placeholder + # + # prepare all OTHER bytecode sequences + for tp in all_decls: + if not tp.is_raw_function and self._typesdict[tp] is None: + self._typesdict[tp] = len(self.cffi_types) + self.cffi_types.append(tp) # placeholder + if tp.is_array_type and tp.length is not None: + self.cffi_types.append('LEN') # placeholder + assert None not in self._typesdict.values() + # + # collect all structs and unions and enums + self._struct_unions = {} + self._enums = {} + for tp in all_decls: + if isinstance(tp, model.StructOrUnion): + self._struct_unions[tp] = None + elif isinstance(tp, model.EnumType): + self._enums[tp] = None + for i, tp in enumerate(sorted(self._struct_unions, + key=lambda tp: tp.name)): + self._struct_unions[tp] = i + for i, tp in enumerate(sorted(self._enums, + key=lambda tp: tp.name)): + self._enums[tp] = i + # + # emit all bytecode sequences now + for tp in all_decls: + method = getattr(self, '_emit_bytecode_' + tp.__class__.__name__) + method(tp, self._typesdict[tp]) + # + # consistency check + for op in self.cffi_types: + assert isinstance(op, CffiOp) + self.cffi_types = tuple(self.cffi_types) # don't change any more + + def _enum_fields(self, tp): + # When producing C, expand all anonymous struct/union fields. + # That's necessary to have C code checking the offsets of the + # individual fields contained in them. When producing Python, + # don't do it and instead write it like it is, with the + # corresponding fields having an empty name. Empty names are + # recognized at runtime when we import the generated Python + # file. 
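+        # Editor's illustration (the struct below is an assumed example,
+        # not upstream text): for a cdef like
+        #     struct s { int a; union { int b; float c; }; };
+        # the expanded (C) form enumerates 'a', 'b' and 'c', letting the
+        # generated C code check every offset, while the non-expanded
+        # (Python/ABI) form enumerates 'a' plus one field with an empty
+        # name that stands for the whole anonymous union.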
+ expand_anonymous_struct_union = not self.target_is_python + return tp.enumfields(expand_anonymous_struct_union) + + def _do_collect_type(self, tp): + if not isinstance(tp, model.BaseTypeByIdentity): + if isinstance(tp, tuple): + for x in tp: + self._do_collect_type(x) + return + if tp not in self._typesdict: + self._typesdict[tp] = None + if isinstance(tp, model.FunctionPtrType): + self._do_collect_type(tp.as_raw_function()) + elif isinstance(tp, model.StructOrUnion): + if tp.fldtypes is not None and ( + tp not in self.ffi._parser._included_declarations): + for name1, tp1, _, _ in self._enum_fields(tp): + self._do_collect_type(self._field_type(tp, name1, tp1)) + else: + for _, x in tp._get_items(): + self._do_collect_type(x) + + def _generate(self, step_name): + lst = self.ffi._parser._declarations.items() + for name, (tp, quals) in sorted(lst): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in recompile(): %r" % name) + try: + self._current_quals = quals + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + # ---------- + + ALL_STEPS = ["global", "field", "struct_union", "enum", "typename"] + + def collect_step_tables(self): + # collect the declarations for '_cffi_globals', '_cffi_typenames', etc. + self._lsts = {} + for step_name in self.ALL_STEPS: + self._lsts[step_name] = [] + self._seen_struct_unions = set() + self._generate("ctx") + self._add_missing_struct_unions() + # + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if step_name != "field": + lst.sort(key=lambda entry: entry.name) + self._lsts[step_name] = tuple(lst) # don't change any more + # + # check for a possible internal inconsistency: _cffi_struct_unions + # should have been generated with exactly self._struct_unions + lst = self._lsts["struct_union"] + for tp, i in self._struct_unions.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._struct_unions) + # same with enums + lst = self._lsts["enum"] + for tp, i in self._enums.items(): + assert i < len(lst) + assert lst[i].name == tp.name + assert len(lst) == len(self._enums) + + # ---------- + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self, f, preamble): + if self.target_is_python: + assert preamble is None + self.write_py_source_to_f(f) + else: + assert preamble is not None + self.write_c_source_to_f(f, preamble) + + def _rel_readlines(self, filename): + g = open(os.path.join(os.path.dirname(__file__), filename), 'r') + lines = g.readlines() + g.close() + return lines + + def write_c_source_to_f(self, f, preamble): + self._f = f + prnt = self._prnt + if self.ffi._embedding is not None: + prnt('#define _CFFI_USE_EMBEDDING') + if not USE_LIMITED_API: + prnt('#define _CFFI_NO_LIMITED_API') + # + # first the '#include' (actually done by inlining the file's content) + lines = self._rel_readlines('_cffi_include.h') + i = lines.index('#include "parse_c_type.h"\n') + lines[i:i+1] = self._rel_readlines('parse_c_type.h') + prnt(''.join(lines)) + # + # if we have ffi._embedding != None, we give it here as a macro + # and include an extra file + base_module_name = self.module_name.split('.')[-1] + if self.ffi._embedding is not None: + prnt('#define _CFFI_MODULE_NAME "%s"' % (self.module_name,)) + prnt('static const char _CFFI_PYTHON_STARTUP_CODE[] = {') + 
self._print_string_literal_in_array(self.ffi._embedding) + prnt('0 };') + prnt('#ifdef PYPY_VERSION') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC _cffi_pypyinit_%s' % ( + base_module_name,)) + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC PyInit_%s' % ( + base_module_name,)) + prnt('#else') + prnt('# define _CFFI_PYTHON_STARTUP_FUNC init%s' % ( + base_module_name,)) + prnt('#endif') + lines = self._rel_readlines('_embedding.h') + i = lines.index('#include "_cffi_errors.h"\n') + lines[i:i+1] = self._rel_readlines('_cffi_errors.h') + prnt(''.join(lines)) + self.needs_version(VERSION_EMBEDDED) + # + # then paste the C source given by the user, verbatim. + prnt('/************************************************************/') + prnt() + prnt(preamble) + prnt() + prnt('/************************************************************/') + prnt() + # + # the declaration of '_cffi_types' + prnt('static void *_cffi_types[] = {') + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + for i, op in enumerate(self.cffi_types): + comment = '' + if i in typeindex2type: + comment = ' // ' + typeindex2type[i]._get_c_name() + prnt('/* %2d */ %s,%s' % (i, op.as_c_expr(), comment)) + if not self.cffi_types: + prnt(' 0') + prnt('};') + prnt() + # + # call generate_cpy_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. + self._seen_constants = set() + self._generate("decl") + # + # the declaration of '_cffi_globals' and '_cffi_typenames' + nums = {} + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + nums[step_name] = len(lst) + if nums[step_name] > 0: + prnt('static const struct _cffi_%s_s _cffi_%ss[] = {' % ( + step_name, step_name)) + for entry in lst: + prnt(entry.as_c_expr()) + prnt('};') + prnt() + # + # the declaration of '_cffi_includes' + if self.ffi._included_ffis: + prnt('static const char * const _cffi_includes[] = {') + for ffi_to_include in self.ffi._included_ffis: + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is None: + raise VerificationError( + "not implemented yet: ffi.include() of a Python-based " + "ffi inside a C-based ffi") + prnt(' "%s",' % (included_module_name,)) + prnt(' NULL') + prnt('};') + prnt() + # + # the declaration of '_cffi_type_context' + prnt('static const struct _cffi_type_context_s _cffi_type_context = {') + prnt(' _cffi_types,') + for step_name in self.ALL_STEPS: + if nums[step_name] > 0: + prnt(' _cffi_%ss,' % step_name) + else: + prnt(' NULL, /* no %ss */' % step_name) + for step_name in self.ALL_STEPS: + if step_name != "field": + prnt(' %d, /* num_%ss */' % (nums[step_name], step_name)) + if self.ffi._included_ffis: + prnt(' _cffi_includes,') + else: + prnt(' NULL, /* no includes */') + prnt(' %d, /* num_types */' % (len(self.cffi_types),)) + flags = 0 + if self._num_externpy > 0 or self.ffi._embedding is not None: + flags |= 1 # set to mean that we use extern "Python" + prnt(' %d, /* flags */' % flags) + prnt('};') + prnt() + # + # the init function + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility push(default) /* for -fvisibility= */') + prnt('#endif') + prnt() + prnt('#ifdef PYPY_VERSION') + prnt('PyMODINIT_FUNC') + prnt('_cffi_pypyinit_%s(const void *p[])' % (base_module_name,)) + prnt('{') + if flags & 
1: + prnt(' if (((intptr_t)p[0]) >= 0x0A03) {') + prnt(' _cffi_call_python_org = ' + '(void(*)(struct _cffi_externpy_s *, char *))p[1];') + prnt(' }') + prnt(' p[0] = (const void *)0x%x;' % self._version) + prnt(' p[1] = &_cffi_type_context;') + prnt('#if PY_MAJOR_VERSION >= 3') + prnt(' return NULL;') + prnt('#endif') + prnt('}') + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + prnt('# ifdef _MSC_VER') + prnt(' PyMODINIT_FUNC') + prnt('# if PY_MAJOR_VERSION >= 3') + prnt(' PyInit_%s(void) { return NULL; }' % (base_module_name,)) + prnt('# else') + prnt(' init%s(void) { }' % (base_module_name,)) + prnt('# endif') + prnt('# endif') + prnt('#elif PY_MAJOR_VERSION >= 3') + prnt('PyMODINIT_FUNC') + prnt('PyInit_%s(void)' % (base_module_name,)) + prnt('{') + prnt(' return _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#else') + prnt('PyMODINIT_FUNC') + prnt('init%s(void)' % (base_module_name,)) + prnt('{') + prnt(' _cffi_init("%s", 0x%x, &_cffi_type_context);' % ( + self.module_name, self._version)) + prnt('}') + prnt('#endif') + prnt() + prnt('#ifdef __GNUC__') + prnt('# pragma GCC visibility pop') + prnt('#endif') + self._version = None + + def _to_py(self, x): + if isinstance(x, str): + return "b'%s'" % (x,) + if isinstance(x, (list, tuple)): + rep = [self._to_py(item) for item in x] + if len(rep) == 1: + rep.append('') + return "(%s)" % (','.join(rep),) + return x.as_python_expr() # Py2: unicode unexpected; Py3: bytes unexp. + + def write_py_source_to_f(self, f): + self._f = f + prnt = self._prnt + # + # header + prnt("# auto-generated file") + prnt("import _cffi_backend") + # + # the 'import' of the included ffis + num_includes = len(self.ffi._included_ffis or ()) + for i in range(num_includes): + ffi_to_include = self.ffi._included_ffis[i] + try: + included_module_name, included_source = ( + ffi_to_include._assigned_source[:2]) + except AttributeError: + raise VerificationError( + "ffi object %r includes %r, but the latter has not " + "been prepared with set_source()" % ( + self.ffi, ffi_to_include,)) + if included_source is not None: + raise VerificationError( + "not implemented yet: ffi.include() of a C-based " + "ffi inside a Python-based ffi") + prnt('from %s import ffi as _ffi%d' % (included_module_name, i)) + prnt() + prnt("ffi = _cffi_backend.FFI('%s'," % (self.module_name,)) + prnt(" _version = 0x%x," % (self._version,)) + self._version = None + # + # the '_types' keyword argument + self.cffi_types = tuple(self.cffi_types) # don't change any more + types_lst = [op.as_python_bytes() for op in self.cffi_types] + prnt(' _types = %s,' % (self._to_py(''.join(types_lst)),)) + typeindex2type = dict([(i, tp) for (tp, i) in self._typesdict.items()]) + # + # the keyword arguments from ALL_STEPS + for step_name in self.ALL_STEPS: + lst = self._lsts[step_name] + if len(lst) > 0 and step_name != "field": + prnt(' _%ss = %s,' % (step_name, self._to_py(lst))) + # + # the '_includes' keyword argument + if num_includes > 0: + prnt(' _includes = (%s,),' % ( + ', '.join(['_ffi%d' % i for i in range(num_includes)]),)) + # + # the footer + prnt(')') + + # ---------- + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! 
:-) + return self._typesdict[type] + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.BasePrimitiveType) and not tp.is_complex_type(): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + elif isinstance(tp, model.UnknownFloatType): + # don't check with is_float_type(): it may be a 'long + # double' here, and _cffi_to_c_double would loose precision + converter = '(%s)_cffi_to_c_double' % (tp.get_c_name(''),) + else: + cname = tp.get_c_name('') + converter = '(%s)_cffi_to_c_%s' % (cname, + tp.name.replace(' ', '_')) + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif (isinstance(tp, model.StructOrUnionOrEnum) or + isinstance(tp, model.BasePrimitiveType)): + # a struct (not a struct pointer) as a function argument; + # or, a complex (the same code works) + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' + '(%s)alloca((size_t)datasize) : NULL;' % ( + tovar, tp.get_c_name(''))) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.BasePrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif isinstance(tp, model.UnknownFloatType): + return '_cffi_from_c_double(%s)' % (var,) + elif tp.name != 'long double' and not tp.is_complex_type(): + cname = tp.name.replace(' ', '_') + if cname in ('char16_t', 'char32_t'): + self.needs_version(VERSION_CHAR16CHAR32) + return '_cffi_from_c_%s(%s)' % (cname, var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs + + def _typedef_type(self, tp, name): + return self._global_type(tp, "(*(%s *)0)" % (name,)) + + def _generate_cpy_typedef_collecttype(self, tp, name): + self._do_collect_type(self._typedef_type(tp, name)) + + def _generate_cpy_typedef_decl(self, tp, name): + pass + + def _typedef_ctx(self, tp, name): + type_index = self._typesdict[tp] + self._lsts["typename"].append(TypenameExpr(name, type_index)) + + def _generate_cpy_typedef_ctx(self, tp, name): + tp = self._typedef_type(tp, name) + self._typedef_ctx(tp, name) + if getattr(tp, "origin", None) == "unknown_type": + self._struct_ctx(tp, tp.name, approxname=None) + elif isinstance(tp, model.NamedPointerType): + self._struct_ctx(tp.totype, tp.totype.name, approxname=tp.name, + named_ptr=tp) + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + self._do_collect_type(tp.as_raw_function()) + if tp.ellipsis and not self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_function_decl(self, tp, name): + assert not self.target_is_python + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_constant_decl(tp, name) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + # + # ------------------------------ + # the 'd' version of the function, only for addressof(lib, 'func') + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arguments.append(type.get_c_name(' x%d' % i, context)) + call_arguments.append('x%d' % i) + repr_arguments = ', 
'.join(arguments) + repr_arguments = repr_arguments or 'void' + if tp.abi: + abi = tp.abi + ' ' + else: + abi = '' + name_and_arguments = '%s_cffi_d_%s(%s)' % (abi, name, repr_arguments) + prnt('static %s' % (tp.result.get_c_name(name_and_arguments),)) + prnt('{') + call_arguments = ', '.join(call_arguments) + result_code = 'return ' + if isinstance(tp.result, model.VoidType): + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, call_arguments)) + prnt('}') + # + prnt('#ifndef PYPY_VERSION') # ------------------------------ + # + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' x%d' % i, context) + prnt(' %s;' % arg) + # + localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + result_decl = ' %s;' % tp.result.get_c_name(' result', context) + prnt(result_decl) + prnt(' PyObject *pyresult;') + else: + result_decl = None + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_UnpackTuple(args, "%s", %d, %d, %s))' % ( + name, len(rng), len(rng), + ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + call_arguments = ['x%d' % i for i in range(len(tp.args))] + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + # + prnt('#else') # ------------------------------ + # + # the PyPy version: need to replace struct/union arguments with + # pointers, and if the result is a struct/union, insert a first + # arg that is a pointer to the result. We also do that for + # complex args and return type. 
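+        # Editor's sketch (the 'point'/'shift' names are assumed, not
+        # from upstream): for a cdef like
+        #     struct point { int x, y; };
+        #     struct point shift(struct point p);
+        # the branch below detects the struct argument and result and
+        # emits roughly
+        #     static void _cffi_f_shift(struct point *result,
+        #                               struct point *x0)
+        #     {
+        #       { *result = shift(*x0); }
+        #     }
+        # signatures without struct/complex values instead fall through
+        # to the '# define _cffi_f_NAME _cffi_d_NAME' alias below.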
+ def need_indirection(type): + return (isinstance(type, model.StructOrUnion) or + (isinstance(type, model.PrimitiveType) and + type.is_complex_type())) + difference = False + arguments = [] + call_arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + indirection = '' + if need_indirection(type): + indirection = '*' + difference = True + arg = type.get_c_name(' %sx%d' % (indirection, i), context) + arguments.append(arg) + call_arguments.append('%sx%d' % (indirection, i)) + tp_result = tp.result + if need_indirection(tp_result): + context = 'result of %s' % name + arg = tp_result.get_c_name(' *result', context) + arguments.insert(0, arg) + tp_result = model.void_type + result_decl = None + result_code = '*result = ' + difference = True + if difference: + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s_cffi_f_%s(%s)' % (abi, name, + repr_arguments) + prnt('static %s' % (tp_result.get_c_name(name_and_arguments),)) + prnt('{') + if result_decl: + prnt(result_decl) + call_arguments = ', '.join(call_arguments) + prnt(' { %s%s(%s); }' % (result_code, name, call_arguments)) + if result_decl: + prnt(' return result;') + prnt('}') + else: + prnt('# define _cffi_f_%s _cffi_d_%s' % (name, name)) + # + prnt('#endif') # ------------------------------ + prnt() + + def _generate_cpy_function_ctx(self, tp, name): + if tp.ellipsis and not self.target_is_python: + self._generate_cpy_constant_ctx(tp, name) + return + type_index = self._typesdict[tp.as_raw_function()] + numargs = len(tp.args) + if self.target_is_python: + meth_kind = OP_DLOPEN_FUNC + elif numargs == 0: + meth_kind = OP_CPYTHON_BLTN_N # 'METH_NOARGS' + elif numargs == 1: + meth_kind = OP_CPYTHON_BLTN_O # 'METH_O' + else: + meth_kind = OP_CPYTHON_BLTN_V # 'METH_VARARGS' + self._lsts["global"].append( + GlobalExpr(name, '_cffi_f_%s' % name, + CffiOp(meth_kind, type_index), + size='_cffi_d_%s' % name)) + + # ---------- + # named structs or unions + + def _field_type(self, tp_struct, field_name, tp_field): + if isinstance(tp_field, model.ArrayType): + actual_length = tp_field.length + if actual_length == '...': + ptr_struct_name = tp_struct.get_c_name('*') + actual_length = '_cffi_array_len(((%s)0)->%s)' % ( + ptr_struct_name, field_name) + tp_item = self._field_type(tp_struct, '%s[0]' % field_name, + tp_field.item) + tp_field = model.ArrayType(tp_item, actual_length) + return tp_field + + def _struct_collecttype(self, tp): + self._do_collect_type(tp) + if self.target_is_python: + # also requires nested anon struct/unions in ABI mode, recursively + for fldtype in tp.anonymous_struct_fields(): + self._struct_collecttype(fldtype) + + def _struct_decl(self, tp, cname, approxname): + if tp.fldtypes is None: + return + prnt = self._prnt + checkfuncname = '_cffi_checkfld_%s' % (approxname,) + prnt('_CFFI_UNUSED_FN') + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in self._enum_fields(tp): + try: + if ftype.is_integer_type() or fbitsize >= 0: + # accept all integers, but complain on float or double + if fname != '': + prnt(" (void)((p->%s) | 0); /* check that '%s.%s' is " + "an integer */" % (fname, cname, fname)) + continue + # only accept exactly the type declared, except that '[]' + # is interpreted as a '*' and so will match any array length. + # (It would also match '*', but that's harder to detect...) 
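+                # Editor's illustration (assumed field 'int a[];'): the
+                # loop below strips the open-ended array layer, so the
+                # emitted compile-time check is roughly
+                #     { int *tmp = &p->a[0]; (void)tmp; }
+                # which compiles regardless of the real array length.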
+ while (isinstance(ftype, model.ArrayType) + and (ftype.length is None or ftype.length == '...')): + ftype = ftype.item + fname = fname + '[0]' + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('struct _cffi_align_%s { char x; %s y; };' % (approxname, cname)) + prnt() + + def _struct_ctx(self, tp, cname, approxname, named_ptr=None): + type_index = self._typesdict[tp] + reason_for_not_expanding = None + flags = [] + if isinstance(tp, model.UnionType): + flags.append("_CFFI_F_UNION") + if tp.fldtypes is None: + flags.append("_CFFI_F_OPAQUE") + reason_for_not_expanding = "opaque" + if (tp not in self.ffi._parser._included_declarations and + (named_ptr is None or + named_ptr not in self.ffi._parser._included_declarations)): + if tp.fldtypes is None: + pass # opaque + elif tp.partial or any(tp.anonymous_struct_fields()): + pass # field layout obtained silently from the C compiler + else: + flags.append("_CFFI_F_CHECK_FIELDS") + if tp.packed: + if tp.packed > 1: + raise NotImplementedError( + "%r is declared with 'pack=%r'; only 0 or 1 are " + "supported in API mode (try to use \"...;\", which " + "does not require a 'pack' declaration)" % + (tp, tp.packed)) + flags.append("_CFFI_F_PACKED") + else: + flags.append("_CFFI_F_EXTERNAL") + reason_for_not_expanding = "external" + flags = '|'.join(flags) or '0' + c_fields = [] + if reason_for_not_expanding is None: + enumfields = list(self._enum_fields(tp)) + for fldname, fldtype, fbitsize, fqual in enumfields: + fldtype = self._field_type(tp, fldname, fldtype) + self._check_not_opaque(fldtype, + "field '%s.%s'" % (tp.name, fldname)) + # cname is None for _add_missing_struct_unions() only + op = OP_NOOP + if fbitsize >= 0: + op = OP_BITFIELD + size = '%d /* bits */' % fbitsize + elif cname is None or ( + isinstance(fldtype, model.ArrayType) and + fldtype.length is None): + size = '(size_t)-1' + else: + size = 'sizeof(((%s)0)->%s)' % ( + tp.get_c_name('*') if named_ptr is None + else named_ptr.name, + fldname) + if cname is None or fbitsize >= 0: + offset = '(size_t)-1' + elif named_ptr is not None: + offset = '((char *)&((%s)4096)->%s) - (char *)4096' % ( + named_ptr.name, fldname) + else: + offset = 'offsetof(%s, %s)' % (tp.get_c_name(''), fldname) + c_fields.append( + FieldExpr(fldname, offset, size, fbitsize, + CffiOp(op, self._typesdict[fldtype]))) + first_field_index = len(self._lsts["field"]) + self._lsts["field"].extend(c_fields) + # + if cname is None: # unknown name, for _add_missing_struct_unions + size = '(size_t)-2' + align = -2 + comment = "unnamed" + else: + if named_ptr is not None: + size = 'sizeof(*(%s)0)' % (named_ptr.name,) + align = '-1 /* unknown alignment */' + else: + size = 'sizeof(%s)' % (cname,) + align = 'offsetof(struct _cffi_align_%s, y)' % (approxname,) + comment = None + else: + size = '(size_t)-1' + align = -1 + first_field_index = -1 + comment = reason_for_not_expanding + self._lsts["struct_union"].append( + StructUnionExpr(tp.name, type_index, flags, size, align, comment, + first_field_index, c_fields)) + self._seen_struct_unions.add(tp) + + def _check_not_opaque(self, tp, location): + while isinstance(tp, model.ArrayType): + tp = tp.item + if isinstance(tp, model.StructOrUnion) and tp.fldtypes is None: + raise TypeError( + "%s is of an opaque type (not declared in cdef())" % location) + + def _add_missing_struct_unions(self): + # not very nice, but 
some struct declarations might be missing + # because they don't have any known C name. Check that they are + # not partial (we can't complete or verify them!) and emit them + # anonymously. + lst = list(self._struct_unions.items()) + lst.sort(key=lambda tp_order: tp_order[1]) + for tp, order in lst: + if tp not in self._seen_struct_unions: + if tp.partial: + raise NotImplementedError("internal inconsistency: %r is " + "partial but was not seen at " + "this point" % (tp,)) + if tp.name.startswith('$') and tp.name[1:].isdigit(): + approxname = tp.name[1:] + elif tp.name == '_IO_FILE' and tp.forcename == 'FILE': + approxname = 'FILE' + self._typedef_ctx(tp, 'FILE') + else: + raise NotImplementedError("internal inconsistency: %r" % + (tp,)) + self._struct_ctx(tp, None, approxname) + + def _generate_cpy_struct_collecttype(self, tp, name): + self._struct_collecttype(tp) + _generate_cpy_union_collecttype = _generate_cpy_struct_collecttype + + def _struct_names(self, tp): + cname = tp.get_c_name('') + if ' ' in cname: + return cname, cname.replace(' ', '_') + else: + return cname, '_' + cname + + def _generate_cpy_struct_decl(self, tp, name): + self._struct_decl(tp, *self._struct_names(tp)) + _generate_cpy_union_decl = _generate_cpy_struct_decl + + def _generate_cpy_struct_ctx(self, tp, name): + self._struct_ctx(tp, *self._struct_names(tp)) + _generate_cpy_union_ctx = _generate_cpy_struct_ctx + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. + + def _generate_cpy_anonymous_collecttype(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_collecttype(tp, name) + else: + self._struct_collecttype(tp) + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp) + else: + self._struct_decl(tp, name, 'typedef_' + name) + + def _generate_cpy_anonymous_ctx(self, tp, name): + if isinstance(tp, model.EnumType): + self._enum_ctx(tp, name) + else: + self._struct_ctx(tp, name, 'typedef_' + name) + + # ---------- + # constants, declared with "static const ..." 
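+    # Editor's sketch (the constant name FOO is an assumed example):
+    # for a cdef like
+    #     static const int FOO;
+    # _generate_cpy_const() below emits roughly
+    #     static int _cffi_const_FOO(unsigned long long *o)
+    #     {
+    #       int n = (FOO) <= 0;
+    #       *o = (unsigned long long)((FOO) | 0);
+    #       return n;
+    #     }
+    # letting the runtime recover both the value and its sign.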
+ + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + check_value=None): + if (category, name) in self._seen_constants: + raise VerificationError( + "duplicate declaration of %s '%s'" % (category, name)) + self._seen_constants.add((category, name)) + # + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + if is_int: + prnt('static int %s(unsigned long long *o)' % funcname) + prnt('{') + prnt(' int n = (%s) <= 0;' % (name,)) + prnt(' *o = (unsigned long long)((%s) | 0);' + ' /* check that %s is an integer */' % (name, name)) + if check_value is not None: + if check_value > 0: + check_value = '%dU' % (check_value,) + prnt(' if (!_cffi_check_int(*o, n, %s))' % (check_value,)) + prnt(' n |= 2;') + prnt(' return n;') + prnt('}') + else: + assert check_value is None + prnt('static void %s(char *o)' % funcname) + prnt('{') + prnt(' *(%s)o = %s;' % (tp.get_c_name('*'), name)) + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = tp.is_integer_type() + if not is_int or self.target_is_python: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + def _generate_cpy_constant_ctx(self, tp, name): + if not self.target_is_python and tp.is_integer_type(): + type_op = CffiOp(OP_CONSTANT_INT, -1) + else: + if self.target_is_python: + const_kind = OP_DLOPEN_CONST + else: + const_kind = OP_CONSTANT + type_index = self._typesdict[tp] + type_op = CffiOp(const_kind, type_index) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op)) + + # ---------- + # enums + + def _generate_cpy_enum_collecttype(self, tp, name): + self._do_collect_type(tp) + + def _generate_cpy_enum_decl(self, tp, name=None): + for enumerator in tp.enumerators: + self._generate_cpy_const(True, enumerator) + + def _enum_ctx(self, tp, cname): + type_index = self._typesdict[tp] + type_op = CffiOp(OP_ENUM, -1) + if self.target_is_python: + tp.check_not_partial() + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._lsts["global"].append( + GlobalExpr(enumerator, '_cffi_const_%s' % enumerator, type_op, + check_value=enumvalue)) + # + if cname is not None and '$' not in cname and not self.target_is_python: + size = "sizeof(%s)" % cname + signed = "((%s)-1) <= 0" % cname + else: + basetp = tp.build_baseinttype(self.ffi, []) + size = self.ffi.sizeof(basetp) + signed = int(int(self.ffi.cast(basetp, -1)) < 0) + allenums = ",".join(tp.enumerators) + self._lsts["enum"].append( + EnumExpr(tp.name, type_index, size, signed, allenums)) + + def _generate_cpy_enum_ctx(self, tp, name): + self._enum_ctx(tp, tp._get_c_name()) + + # ---------- + # macros: for now only for integers + + def _generate_cpy_macro_collecttype(self, tp, name): + pass + + def _generate_cpy_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_cpy_const(True, name, check_value=check_value) + + def _generate_cpy_macro_ctx(self, tp, name): + if tp == '...': + if self.target_is_python: + raise VerificationError( + "cannot use the syntax '...' in '#define %s ...' 
when " + "using the ABI mode" % (name,)) + check_value = None + else: + check_value = tp # an integer + type_op = CffiOp(OP_CONSTANT_INT, -1) + self._lsts["global"].append( + GlobalExpr(name, '_cffi_const_%s' % name, type_op, + check_value=check_value)) + + # ---------- + # global variables + + def _global_type(self, tp, global_name): + if isinstance(tp, model.ArrayType): + actual_length = tp.length + if actual_length == '...': + actual_length = '_cffi_array_len(%s)' % (global_name,) + tp_item = self._global_type(tp.item, '%s[0]' % global_name) + tp = model.ArrayType(tp_item, actual_length) + return tp + + def _generate_cpy_variable_collecttype(self, tp, name): + self._do_collect_type(self._global_type(tp, name)) + + def _generate_cpy_variable_decl(self, tp, name): + prnt = self._prnt + tp = self._global_type(tp, name) + if isinstance(tp, model.ArrayType) and tp.length is None: + tp = tp.item + ampersand = '' + else: + ampersand = '&' + # This code assumes that casts from "tp *" to "void *" is a + # no-op, i.e. a function that returns a "tp *" can be called + # as if it returned a "void *". This should be generally true + # on any modern machine. The only exception to that rule (on + # uncommon architectures, and as far as I can tell) might be + # if 'tp' were a function type, but that is not possible here. + # (If 'tp' is a function _pointer_ type, then casts from "fn_t + # **" to "void *" are again no-ops, as far as I can tell.) + decl = '*_cffi_var_%s(void)' % (name,) + prnt('static ' + tp.get_c_name(decl, quals=self._current_quals)) + prnt('{') + prnt(' return %s(%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_cpy_variable_ctx(self, tp, name): + tp = self._global_type(tp, name) + type_index = self._typesdict[tp] + if self.target_is_python: + op = OP_GLOBAL_VAR + else: + op = OP_GLOBAL_VAR_F + self._lsts["global"].append( + GlobalExpr(name, '_cffi_var_%s' % name, CffiOp(op, type_index))) + + # ---------- + # extern "Python" + + def _generate_cpy_extern_python_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + self._do_collect_type(tp) + _generate_cpy_dllexport_python_collecttype = \ + _generate_cpy_extern_python_plus_c_collecttype = \ + _generate_cpy_extern_python_collecttype + + def _extern_python_decl(self, tp, name, tag_and_space): + prnt = self._prnt + if isinstance(tp.result, model.VoidType): + size_of_result = '0' + else: + context = 'result of %s' % name + size_of_result = '(int)sizeof(%s)' % ( + tp.result.get_c_name('', context),) + prnt('static struct _cffi_externpy_s _cffi_externpy__%s =' % name) + prnt(' { "%s.%s", %s, 0, 0 };' % ( + self.module_name, name, size_of_result)) + prnt() + # + arguments = [] + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + arg = type.get_c_name(' a%d' % i, context) + arguments.append(arg) + # + repr_arguments = ', '.join(arguments) + repr_arguments = repr_arguments or 'void' + name_and_arguments = '%s(%s)' % (name, repr_arguments) + if tp.abi == "__stdcall": + name_and_arguments = '_cffi_stdcall ' + name_and_arguments + # + def may_need_128_bits(tp): + return (isinstance(tp, model.PrimitiveType) and + tp.name == 'long double') + # + size_of_a = max(len(tp.args)*8, 8) + if may_need_128_bits(tp.result): + size_of_a = max(size_of_a, 16) + if isinstance(tp.result, model.StructOrUnion): + size_of_a = 'sizeof(%s) > %d ? 
sizeof(%s) : %d' % ( + tp.result.get_c_name(''), size_of_a, + tp.result.get_c_name(''), size_of_a) + prnt('%s%s' % (tag_and_space, tp.result.get_c_name(name_and_arguments))) + prnt('{') + prnt(' char a[%s];' % size_of_a) + prnt(' char *p = a;') + for i, type in enumerate(tp.args): + arg = 'a%d' % i + if (isinstance(type, model.StructOrUnion) or + may_need_128_bits(type)): + arg = '&' + arg + type = model.PointerType(type) + prnt(' *(%s)(p + %d) = %s;' % (type.get_c_name('*'), i*8, arg)) + prnt(' _cffi_call_python(&_cffi_externpy__%s, p);' % name) + if not isinstance(tp.result, model.VoidType): + prnt(' return *(%s)p;' % (tp.result.get_c_name('*'),)) + prnt('}') + prnt() + self._num_externpy += 1 + + def _generate_cpy_extern_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'static ') + + def _generate_cpy_dllexport_python_decl(self, tp, name): + self._extern_python_decl(tp, name, 'CFFI_DLLEXPORT ') + + def _generate_cpy_extern_python_plus_c_decl(self, tp, name): + self._extern_python_decl(tp, name, '') + + def _generate_cpy_extern_python_ctx(self, tp, name): + if self.target_is_python: + raise VerificationError( + "cannot use 'extern \"Python\"' in the ABI mode") + if tp.ellipsis: + raise NotImplementedError("a vararg function is extern \"Python\"") + type_index = self._typesdict[tp] + type_op = CffiOp(OP_EXTERN_PYTHON, type_index) + self._lsts["global"].append( + GlobalExpr(name, '&_cffi_externpy__%s' % name, type_op, name)) + + _generate_cpy_dllexport_python_ctx = \ + _generate_cpy_extern_python_plus_c_ctx = \ + _generate_cpy_extern_python_ctx + + def _print_string_literal_in_array(self, s): + prnt = self._prnt + prnt('// # NB. this is not a string because of a size limit in MSVC') + if not isinstance(s, bytes): # unicode + s = s.encode('utf-8') # -> bytes + else: + s.decode('utf-8') # got bytes, check for valid utf-8 + try: + s.decode('ascii') + except UnicodeDecodeError: + s = b'# -*- encoding: utf8 -*-\n' + s + for line in s.splitlines(True): + comment = line + if type('//') is bytes: # python2 + line = map(ord, line) # make a list of integers + else: # python3 + # type(line) is bytes, which enumerates like a list of integers + comment = ascii(comment)[1:-1] + prnt(('// ' + comment).rstrip()) + printed_line = '' + for c in line: + if len(printed_line) >= 76: + prnt(printed_line) + printed_line = '' + printed_line += '%d,' % (c,) + prnt(printed_line) + + # ---------- + # emitting the opcodes for individual types + + def _emit_bytecode_VoidType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, PRIM_VOID) + + def _emit_bytecode_PrimitiveType(self, tp, index): + prim_index = PRIMITIVE_TO_INDEX[tp.name] + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, prim_index) + + def _emit_bytecode_UnknownIntegerType(self, tp, index): + s = ('_cffi_prim_int(sizeof(%s), (\n' + ' ((%s)-1) | 0 /* check that %s is an integer type */\n' + ' ) <= 0)' % (tp.name, tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_UnknownFloatType(self, tp, index): + s = ('_cffi_prim_float(sizeof(%s) *\n' + ' (((%s)1) / 2) * 2 /* integer => 0, float => 1 */\n' + ' )' % (tp.name, tp.name)) + self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) + + def _emit_bytecode_RawFunctionType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_FUNCTION, self._typesdict[tp.result]) + index += 1 + for tp1 in tp.args: + realindex = self._typesdict[tp1] + if index != realindex: + if isinstance(tp1, model.PrimitiveType): + self._emit_bytecode_PrimitiveType(tp1, index) + 
else: + self.cffi_types[index] = CffiOp(OP_NOOP, realindex) + index += 1 + flags = int(tp.ellipsis) + if tp.abi is not None: + if tp.abi == '__stdcall': + flags |= 2 + else: + raise NotImplementedError("abi=%r" % (tp.abi,)) + self.cffi_types[index] = CffiOp(OP_FUNCTION_END, flags) + + def _emit_bytecode_PointerType(self, tp, index): + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[tp.totype]) + + _emit_bytecode_ConstPointerType = _emit_bytecode_PointerType + _emit_bytecode_NamedPointerType = _emit_bytecode_PointerType + + def _emit_bytecode_FunctionPtrType(self, tp, index): + raw = tp.as_raw_function() + self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[raw]) + + def _emit_bytecode_ArrayType(self, tp, index): + item_index = self._typesdict[tp.item] + if tp.length is None: + self.cffi_types[index] = CffiOp(OP_OPEN_ARRAY, item_index) + elif tp.length == '...': + raise VerificationError( + "type %s badly placed: the '...' array length can only be " + "used on global arrays or on fields of structures" % ( + str(tp).replace('/*...*/', '...'),)) + else: + assert self.cffi_types[index + 1] == 'LEN' + self.cffi_types[index] = CffiOp(OP_ARRAY, item_index) + self.cffi_types[index + 1] = CffiOp(None, str(tp.length)) + + def _emit_bytecode_StructType(self, tp, index): + struct_index = self._struct_unions[tp] + self.cffi_types[index] = CffiOp(OP_STRUCT_UNION, struct_index) + _emit_bytecode_UnionType = _emit_bytecode_StructType + + def _emit_bytecode_EnumType(self, tp, index): + enum_index = self._enums[tp] + self.cffi_types[index] = CffiOp(OP_ENUM, enum_index) + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + +def _is_file_like(maybefile): + # compare to xml.etree.ElementTree._get_writer + return hasattr(maybefile, 'write') + +def _make_c_or_py_source(ffi, module_name, preamble, target_file, verbose): + if verbose: + print("generating %s" % (target_file,)) + recompiler = Recompiler(ffi, module_name, + target_is_python=(preamble is None)) + recompiler.collect_type_table() + recompiler.collect_step_tables() + if _is_file_like(target_file): + recompiler.write_source_to_f(target_file, preamble) + return True + f = NativeIO() + recompiler.write_source_to_f(f, preamble) + output = f.getvalue() + try: + with open(target_file, 'r') as f1: + if f1.read(len(output) + 1) != output: + raise IOError + if verbose: + print("(already up-to-date)") + return False # already up-to-date + except IOError: + tmp_file = '%s.~%d' % (target_file, os.getpid()) + with open(tmp_file, 'w') as f1: + f1.write(output) + try: + os.rename(tmp_file, target_file) + except OSError: + os.unlink(target_file) + os.rename(tmp_file, target_file) + return True + +def make_c_source(ffi, module_name, preamble, target_c_file, verbose=False): + assert preamble is not None + return _make_c_or_py_source(ffi, module_name, preamble, target_c_file, + verbose) + +def make_py_source(ffi, module_name, target_py_file, verbose=False): + return _make_c_or_py_source(ffi, module_name, None, target_py_file, + verbose) + +def _modname_to_file(outputdir, modname, extension): + parts = modname.split('.') + try: + os.makedirs(os.path.join(outputdir, *parts[:-1])) + except OSError: + pass + parts[-1] += extension + return os.path.join(outputdir, *parts), parts + + +# Aaargh. Distutils is not tested at all for the purpose of compiling +# DLLs that are not extension modules. 
Here are some hacks to work +# around that, in the _patch_for_*() functions... + +def _patch_meth(patchlist, cls, name, new_meth): + old = getattr(cls, name) + patchlist.append((cls, name, old)) + setattr(cls, name, new_meth) + return old + +def _unpatch_meths(patchlist): + for cls, name, old_meth in reversed(patchlist): + setattr(cls, name, old_meth) + +def _patch_for_embedding(patchlist): + if sys.platform == 'win32': + # we must not remove the manifest when building for embedding! + # FUTURE: this module was removed in setuptools 74; this is likely dead code and should be removed, + # since the toolchain it supports (VS2005-2008) is also long dead. + from cffi._shimmed_dist_utils import MSVCCompiler + if MSVCCompiler is not None: + _patch_meth(patchlist, MSVCCompiler, '_remove_visual_c_ref', + lambda self, manifest_file: manifest_file) + + if sys.platform == 'darwin': + # we must not make a '-bundle', but a '-dynamiclib' instead + from cffi._shimmed_dist_utils import CCompiler + def my_link_shared_object(self, *args, **kwds): + if '-bundle' in self.linker_so: + self.linker_so = list(self.linker_so) + i = self.linker_so.index('-bundle') + self.linker_so[i] = '-dynamiclib' + return old_link_shared_object(self, *args, **kwds) + old_link_shared_object = _patch_meth(patchlist, CCompiler, + 'link_shared_object', + my_link_shared_object) + +def _patch_for_target(patchlist, target): + from cffi._shimmed_dist_utils import build_ext + # if 'target' is different from '*', we need to patch some internal + # method to just return this 'target' value, instead of having it + # built from module_name + if target.endswith('.*'): + target = target[:-2] + if sys.platform == 'win32': + target += '.dll' + elif sys.platform == 'darwin': + target += '.dylib' + else: + target += '.so' + _patch_meth(patchlist, build_ext, 'get_ext_filename', + lambda self, ext_name: target) + + +def recompile(ffi, module_name, preamble, tmpdir='.', call_c_compiler=True, + c_file=None, source_extension='.c', extradir=None, + compiler_verbose=1, target=None, debug=None, + uses_ffiplatform=True, **kwds): + if not isinstance(module_name, str): + module_name = module_name.encode('ascii') + if ffi._windows_unicode: + ffi._apply_windows_unicode(kwds) + if preamble is not None: + if call_c_compiler and _is_file_like(c_file): + raise TypeError("Writing to file-like objects is not supported " + "with call_c_compiler=True") + embedding = (ffi._embedding is not None) + if embedding: + ffi._apply_embedding_fix(kwds) + if c_file is None: + c_file, parts = _modname_to_file(tmpdir, module_name, + source_extension) + if extradir: + parts = [extradir] + parts + ext_c_file = os.path.join(*parts) + else: + ext_c_file = c_file + # + if target is None: + if embedding: + target = '%s.*' % module_name + else: + target = '*' + # + if uses_ffiplatform: + ext = ffiplatform.get_extension(ext_c_file, module_name, **kwds) + else: + ext = None + updated = make_c_source(ffi, module_name, preamble, c_file, + verbose=compiler_verbose) + if call_c_compiler: + patchlist = [] + cwd = os.getcwd() + try: + if embedding: + _patch_for_embedding(patchlist) + if target != '*': + _patch_for_target(patchlist, target) + if compiler_verbose: + if tmpdir == '.': + msg = 'the current directory is' + else: + msg = 'setting the current directory to' + print('%s %r' % (msg, os.path.abspath(tmpdir))) + os.chdir(tmpdir) + outputfilename = ffiplatform.compile('.', ext, + compiler_verbose, debug) + finally: + os.chdir(cwd) + _unpatch_meths(patchlist) + return outputfilename + else: + 
return ext, updated + else: + if c_file is None: + c_file, _ = _modname_to_file(tmpdir, module_name, '.py') + updated = make_py_source(ffi, module_name, c_file, + verbose=compiler_verbose) + if call_c_compiler: + return c_file + else: + return None, updated + diff --git a/templates/skills/file_manager/dependencies/cffi/setuptools_ext.py b/templates/skills/file_manager/dependencies/cffi/setuptools_ext.py new file mode 100644 index 00000000..681b49d7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/setuptools_ext.py @@ -0,0 +1,216 @@ +import os +import sys + +try: + basestring +except NameError: + # Python 3.x + basestring = str + +def error(msg): + from cffi._shimmed_dist_utils import DistutilsSetupError + raise DistutilsSetupError(msg) + + +def execfile(filename, glob): + # We use execfile() (here rewritten for Python 3) instead of + # __import__() to load the build script. The problem with + # a normal import is that in some packages, the intermediate + # __init__.py files may already try to import the file that + # we are generating. + with open(filename) as f: + src = f.read() + src += '\n' # Python 2.6 compatibility + code = compile(src, filename, 'exec') + exec(code, glob, glob) + + +def add_cffi_module(dist, mod_spec): + from cffi.api import FFI + + if not isinstance(mod_spec, basestring): + error("argument to 'cffi_modules=...' must be a str or a list of str," + " not %r" % (type(mod_spec).__name__,)) + mod_spec = str(mod_spec) + try: + build_file_name, ffi_var_name = mod_spec.split(':') + except ValueError: + error("%r must be of the form 'path/build.py:ffi_variable'" % + (mod_spec,)) + if not os.path.exists(build_file_name): + ext = '' + rewritten = build_file_name.replace('.', '/') + '.py' + if os.path.exists(rewritten): + ext = ' (rewrite cffi_modules to [%r])' % ( + rewritten + ':' + ffi_var_name,) + error("%r does not name an existing file%s" % (build_file_name, ext)) + + mod_vars = {'__name__': '__cffi__', '__file__': build_file_name} + execfile(build_file_name, mod_vars) + + try: + ffi = mod_vars[ffi_var_name] + except KeyError: + error("%r: object %r not found in module" % (mod_spec, + ffi_var_name)) + if not isinstance(ffi, FFI): + ffi = ffi() # maybe it's a function instead of directly an ffi + if not isinstance(ffi, FFI): + error("%r is not an FFI instance (got %r)" % (mod_spec, + type(ffi).__name__)) + if not hasattr(ffi, '_assigned_source'): + error("%r: the set_source() method was not called" % (mod_spec,)) + module_name, source, source_extension, kwds = ffi._assigned_source + if ffi._windows_unicode: + kwds = kwds.copy() + ffi._apply_windows_unicode(kwds) + + if source is None: + _add_py_module(dist, ffi, module_name) + else: + _add_c_module(dist, ffi, module_name, source, source_extension, kwds) + +def _set_py_limited_api(Extension, kwds): + """ + Add py_limited_api to kwds if setuptools >= 26 is in use. + Do not alter the setting if it already exists. + Setuptools takes care of ignoring the flag on Python 2 and PyPy. + + CPython itself should ignore the flag in a debugging version + (by not listing .abi3.so in the extensions it supports), but + it doesn't so far, creating troubles. That's why we check + for "not hasattr(sys, 'gettotalrefcount')" (the 2.7 compatible equivalent + of 'd' not in sys.abiflags). (http://bugs.python.org/issue28401) + + On Windows, with CPython <= 3.4, it's better not to use py_limited_api + because virtualenv *still* doesn't copy PYTHON3.DLL on these versions. 
+ Recently (2020) we started shipping only >= 3.5 wheels, though. So + we'll give it another try and set py_limited_api on Windows >= 3.5. + """ + from cffi import recompiler + + if ('py_limited_api' not in kwds and not hasattr(sys, 'gettotalrefcount') + and recompiler.USE_LIMITED_API): + import setuptools + try: + setuptools_major_version = int(setuptools.__version__.partition('.')[0]) + if setuptools_major_version >= 26: + kwds['py_limited_api'] = True + except ValueError: # certain development versions of setuptools + # If we don't know the version number of setuptools, we + # try to set 'py_limited_api' anyway. At worst, we get a + # warning. + kwds['py_limited_api'] = True + return kwds + +def _add_c_module(dist, ffi, module_name, source, source_extension, kwds): + # We are a setuptools extension. Need this build_ext for py_limited_api. + from setuptools.command.build_ext import build_ext + from cffi._shimmed_dist_utils import Extension, log, mkpath + from cffi import recompiler + + allsources = ['$PLACEHOLDER'] + allsources.extend(kwds.pop('sources', [])) + kwds = _set_py_limited_api(Extension, kwds) + ext = Extension(name=module_name, sources=allsources, **kwds) + + def make_mod(tmpdir, pre_run=None): + c_file = os.path.join(tmpdir, module_name + source_extension) + log.info("generating cffi module %r" % c_file) + mkpath(tmpdir) + # a setuptools-only, API-only hook: called with the "ext" and "ffi" + # arguments just before we turn the ffi into C code. To use it, + # subclass the 'distutils.command.build_ext.build_ext' class and + # add a method 'def pre_run(self, ext, ffi)'. + if pre_run is not None: + pre_run(ext, ffi) + updated = recompiler.make_c_source(ffi, module_name, source, c_file) + if not updated: + log.info("already up-to-date") + return c_file + + if dist.ext_modules is None: + dist.ext_modules = [] + dist.ext_modules.append(ext) + + base_class = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class): + def run(self): + if ext.sources[0] == '$PLACEHOLDER': + pre_run = getattr(self, 'pre_run', None) + ext.sources[0] = make_mod(self.build_temp, pre_run) + base_class.run(self) + dist.cmdclass['build_ext'] = build_ext_make_mod + # NB. multiple runs here will create multiple 'build_ext_make_mod' + # classes. Even in this case the 'build_ext' command should be + # run once; but just in case, the logic above does nothing if + # called again. + + +def _add_py_module(dist, ffi, module_name): + from setuptools.command.build_py import build_py + from setuptools.command.build_ext import build_ext + from cffi._shimmed_dist_utils import log, mkpath + from cffi import recompiler + + def generate_mod(py_file): + log.info("generating cffi module %r" % py_file) + mkpath(os.path.dirname(py_file)) + updated = recompiler.make_py_source(ffi, module_name, py_file) + if not updated: + log.info("already up-to-date") + + base_class = dist.cmdclass.get('build_py', build_py) + class build_py_make_mod(base_class): + def run(self): + base_class.run(self) + module_path = module_name.split('.') + module_path[-1] += '.py' + generate_mod(os.path.join(self.build_lib, *module_path)) + def get_source_files(self): + # This is called from 'setup.py sdist' only. Exclude + # the generate .py module in this case. 
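The `cffi_modules` machinery above expects a build script named in the `'path/build.py:ffi_variable'` form, and `make_mod()` exposes a setuptools-only `pre_run` hook. A minimal sketch of both, with hypothetical file and project names (the hook signature itself is taken from the comment in `make_mod()`):

```python
# _example_build.py -- hypothetical build script; add_cffi_module() will
# execfile() it and pick up the module-level 'ffi' object.
from cffi import FFI

ffi = FFI()
ffi.cdef("int add(int a, int b);")
ffi.set_source("_example", "int add(int a, int b) { return a + b; }")

# setup.py -- wires the script in, and subclasses build_ext to use the
# 'pre_run(self, ext, ffi)' hook called just before C code generation.
from setuptools import setup
from setuptools.command.build_ext import build_ext

class build_ext_with_hook(build_ext):
    def pre_run(self, ext, ffi):
        print("about to generate C source for", ext.name)

setup(
    name="example",
    setup_requires=["cffi"],
    cffi_modules=["_example_build.py:ffi"],
    cmdclass={"build_ext": build_ext_with_hook},
)
```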
+ saved_py_modules = self.py_modules + try: + if saved_py_modules: + self.py_modules = [m for m in saved_py_modules + if m != module_name] + return base_class.get_source_files(self) + finally: + self.py_modules = saved_py_modules + dist.cmdclass['build_py'] = build_py_make_mod + + # distutils and setuptools have no notion I could find of a + # generated python module. If we don't add module_name to + # dist.py_modules, then things mostly work but there are some + # combination of options (--root and --record) that will miss + # the module. So we add it here, which gives a few apparently + # harmless warnings about not finding the file outside the + # build directory. + # Then we need to hack more in get_source_files(); see above. + if dist.py_modules is None: + dist.py_modules = [] + dist.py_modules.append(module_name) + + # the following is only for "build_ext -i" + base_class_2 = dist.cmdclass.get('build_ext', build_ext) + class build_ext_make_mod(base_class_2): + def run(self): + base_class_2.run(self) + if self.inplace: + # from get_ext_fullpath() in distutils/command/build_ext.py + module_path = module_name.split('.') + package = '.'.join(module_path[:-1]) + build_py = self.get_finalized_command('build_py') + package_dir = build_py.get_package_dir(package) + file_name = module_path[-1] + '.py' + generate_mod(os.path.join(package_dir, file_name)) + dist.cmdclass['build_ext'] = build_ext_make_mod + +def cffi_modules(dist, attr, value): + assert attr == 'cffi_modules' + if isinstance(value, basestring): + value = [value] + + for cffi_module in value: + add_cffi_module(dist, cffi_module) diff --git a/templates/skills/file_manager/dependencies/cffi/vengine_cpy.py b/templates/skills/file_manager/dependencies/cffi/vengine_cpy.py new file mode 100644 index 00000000..eb0b6f70 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/vengine_cpy.py @@ -0,0 +1,1084 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys +from . import model +from .error import VerificationError +from . import _imp_emulation as imp + + +class VCPythonEngine(object): + _class_key = 'x' + _gen_python_module = True + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self._struct_pending_verification = {} + self._types_of_builtin_functions = {} + + def patch_extension_kwds(self, kwds): + pass + + def find_module(self, module_name, path, so_suffixes): + try: + f, filename, descr = imp.find_module(module_name, path) + except ImportError: + return None + if f is not None: + f.close() + # Note that after a setuptools installation, there are both .py + # and .so files with the same basename. The code here relies on + # imp.find_module() locating the .so in priority. + if descr[0] not in so_suffixes: + return None + return filename + + def collect_types(self): + self._typesdict = {} + self._generate("collecttype") + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def _gettypenum(self, type): + # a KeyError here is a bug. please report it! :-) + return self._typesdict[type] + + def _do_collect_type(self, tp): + if ((not isinstance(tp, model.PrimitiveType) + or tp.name == 'long double') + and tp not in self._typesdict): + num = len(self._typesdict) + self._typesdict[tp] = num + + def write_source_to_f(self): + self.collect_types() + # + # The new module will have a _cffi_setup() function that receives + # objects from the ffi world, and that calls some setup code in + # the module. This setup code is split in several independent + # functions, e.g. 
one per constant. The functions are "chained" + # by ending in a tail call to each other. + # + # This is further split in two chained lists, depending on if we + # can do it at import-time or if we must wait for _cffi_setup() to + # provide us with the objects. This is needed because we + # need the values of the enum constants in order to build the + # that we may have to pass to _cffi_setup(). + # + # The following two 'chained_list_constants' items contains + # the head of these two chained lists, as a string that gives the + # call to do, if any. + self._chained_list_constants = ['((void)lib,0)', '((void)lib,0)'] + # + prnt = self._prnt + # first paste some standard set of lines that are mostly '#define' + prnt(cffimod_header) + prnt() + # then paste the C source given by the user, verbatim. + prnt(self.verifier.preamble) + prnt() + # + # call generate_cpy_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. + self._generate("decl") + # + # implement the function _cffi_setup_custom() as calling the + # head of the chained list. + self._generate_setup_custom() + prnt() + # + # produce the method table, including the entries for the + # generated Python->C function wrappers, which are done + # by generate_cpy_function_method(). + prnt('static PyMethodDef _cffi_methods[] = {') + self._generate("method") + prnt(' {"_cffi_setup", _cffi_setup, METH_VARARGS, NULL},') + prnt(' {NULL, NULL, 0, NULL} /* Sentinel */') + prnt('};') + prnt() + # + # standard init. + modname = self.verifier.get_module_name() + constants = self._chained_list_constants[False] + prnt('#if PY_MAJOR_VERSION >= 3') + prnt() + prnt('static struct PyModuleDef _cffi_module_def = {') + prnt(' PyModuleDef_HEAD_INIT,') + prnt(' "%s",' % modname) + prnt(' NULL,') + prnt(' -1,') + prnt(' _cffi_methods,') + prnt(' NULL, NULL, NULL, NULL') + prnt('};') + prnt() + prnt('PyMODINIT_FUNC') + prnt('PyInit_%s(void)' % modname) + prnt('{') + prnt(' PyObject *lib;') + prnt(' lib = PyModule_Create(&_cffi_module_def);') + prnt(' if (lib == NULL)') + prnt(' return NULL;') + prnt(' if (%s < 0 || _cffi_init() < 0) {' % (constants,)) + prnt(' Py_DECREF(lib);') + prnt(' return NULL;') + prnt(' }') + prnt(' return lib;') + prnt('}') + prnt() + prnt('#else') + prnt() + prnt('PyMODINIT_FUNC') + prnt('init%s(void)' % modname) + prnt('{') + prnt(' PyObject *lib;') + prnt(' lib = Py_InitModule("%s", _cffi_methods);' % modname) + prnt(' if (lib == NULL)') + prnt(' return;') + prnt(' if (%s < 0 || _cffi_init() < 0)' % (constants,)) + prnt(' return;') + prnt(' return;') + prnt('}') + prnt() + prnt('#endif') + + def load_library(self, flags=None): + # XXX review all usages of 'self' here! + # import it as a new extension module + imp.acquire_lock() + try: + if hasattr(sys, "getdlopenflags"): + previous_flags = sys.getdlopenflags() + try: + if hasattr(sys, "setdlopenflags") and flags is not None: + sys.setdlopenflags(flags) + module = imp.load_dynamic(self.verifier.get_module_name(), + self.verifier.modulefilename) + except ImportError as e: + error = "importing %r: %s" % (self.verifier.modulefilename, e) + raise VerificationError(error) + finally: + if hasattr(sys, "setdlopenflags"): + sys.setdlopenflags(previous_flags) + finally: + imp.release_lock() + # + # call loading_cpy_struct() to get the struct layout inferred by + # the C compiler + self._load(module, 'loading') + # + # the C code will need the objects. Collect them in + # order in a list. 
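The "chained list" of constant-initializing functions described above is hard to visualize from the generated C alone. A conceptual pure-Python sketch of the same tail-call chaining, with hypothetical names (the engine code itself resumes right after this example):

```python
# Each generated _cffi_const_*/_cffi_e_* C function sets one attribute on
# 'lib' and ends by tail-calling the previous head of the chain; calling
# the current head therefore initializes everything.  The empty chain is
# '((void)lib,0)', i.e. a no-op that returns 0.
chain_head = lambda lib: 0

def make_const_setter(name, value, prev_head):
    def setter(lib):
        setattr(lib, name, value)
        return prev_head(lib)      # "tail call" into the rest of the chain
    return setter

chain_head = make_const_setter("FOO", 42, chain_head)
chain_head = make_const_setter("BAR", 7, chain_head)

class Lib: pass
assert chain_head(Lib) == 0 and Lib.FOO == 42 and Lib.BAR == 7
```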
+ revmapping = dict([(value, key) + for (key, value) in self._typesdict.items()]) + lst = [revmapping[i] for i in range(len(revmapping))] + lst = list(map(self.ffi._get_cached_btype, lst)) + # + # build the FFILibrary class and instance and call _cffi_setup(). + # this will set up some fields like '_cffi_types', and only then + # it will invoke the chained list of functions that will really + # build (notably) the constant objects, as if they are + # pointers, and store them as attributes on the 'library' object. + class FFILibrary(object): + _cffi_python_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + list(self.__dict__) + library = FFILibrary() + if module._cffi_setup(lst, VerificationError, library): + import warnings + warnings.warn("reimporting %r might overwrite older definitions" + % (self.verifier.get_module_name())) + # + # finally, call the loaded_cpy_xxx() functions. This will perform + # the final adjustments, like copying the Python->C wrapper + # functions from the module to the 'library' object, and setting + # up the FFILibrary class with properties for the global C variables. + self._load(module, 'loaded', library=library) + module._cffi_original_ffi = self.ffi + module._cffi_types_of_builtin_funcs = self._types_of_builtin_functions + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_cpy_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_cpy_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + + def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): + extraarg = '' + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + converter = '_cffi_to_c_int' + extraarg = ', %s' % tp.name + elif tp.is_complex_type(): + raise VerificationError( + "not implemented in verify(): complex types") + else: + converter = '(%s)_cffi_to_c_%s' % (tp.get_c_name(''), + tp.name.replace(' ', '_')) + errvalue = '-1' + # + elif isinstance(tp, model.PointerType): + self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, + tovar, errcode) + return + # + elif isinstance(tp, (model.StructOrUnion, model.EnumType)): + # a struct (not a struct pointer) as a function argument + self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' + % (tovar, self._gettypenum(tp), fromvar)) + self._prnt(' %s;' % errcode) + return + # + elif isinstance(tp, model.FunctionPtrType): + converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') + extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) + errvalue = 'NULL' + # + else: + raise NotImplementedError(tp) + # + self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) + self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( + tovar, 
tp.get_c_name(''), errvalue)) + self._prnt(' %s;' % errcode) + + def _extra_local_variables(self, tp, localvars, freelines): + if isinstance(tp, model.PointerType): + localvars.add('Py_ssize_t datasize') + localvars.add('struct _cffi_freeme_s *large_args_free = NULL') + freelines.add('if (large_args_free != NULL)' + ' _cffi_free_array_arguments(large_args_free);') + + def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): + self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') + self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( + self._gettypenum(tp), fromvar, tovar)) + self._prnt(' if (datasize != 0) {') + self._prnt(' %s = ((size_t)datasize) <= 640 ? ' + 'alloca((size_t)datasize) : NULL;' % (tovar,)) + self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' + '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) + self._prnt(' datasize, &large_args_free) < 0)') + self._prnt(' %s;' % errcode) + self._prnt(' }') + + def _convert_expr_from_c(self, tp, var, context): + if isinstance(tp, model.PrimitiveType): + if tp.is_integer_type() and tp.name != '_Bool': + return '_cffi_from_c_int(%s, %s)' % (var, tp.name) + elif tp.name != 'long double': + return '_cffi_from_c_%s(%s)' % (tp.name.replace(' ', '_'), var) + else: + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.ArrayType): + return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( + var, self._gettypenum(model.PointerType(tp.item))) + elif isinstance(tp, model.StructOrUnion): + if tp.fldnames is None: + raise TypeError("'%s' is used as %s, but is opaque" % ( + tp._get_c_name(), context)) + return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + elif isinstance(tp, model.EnumType): + return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( + var, self._gettypenum(tp)) + else: + raise NotImplementedError(tp) + + # ---------- + # typedefs: generates no code so far + + _generate_cpy_typedef_collecttype = _generate_nothing + _generate_cpy_typedef_decl = _generate_nothing + _generate_cpy_typedef_method = _generate_nothing + _loading_cpy_typedef = _loaded_noop + _loaded_cpy_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_cpy_function_collecttype(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + self._do_collect_type(tp) + else: + # don't call _do_collect_type(tp) in this common case, + # otherwise test_autofilled_struct_as_argument fails + for type in tp.args: + self._do_collect_type(type) + self._do_collect_type(tp.result) + + def _generate_cpy_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no CPython wrapper) + self._generate_cpy_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + if numargs == 0: + argname = 'noarg' + elif numargs == 1: + argname = 'arg0' + else: + argname = 'args' + prnt('static PyObject *') + prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) + prnt('{') + # + context = 'argument of %s' % name + for i, type in enumerate(tp.args): + prnt(' %s;' % type.get_c_name(' x%d' % i, context)) + # 
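For context, this whole engine backs the deprecated `ffi.verify()` API; the argument-conversion helpers above and the wrapper generator below produce the C glue it compiles. A minimal usage sketch (requires a C compiler at run time, which is one reason the API was deprecated):

```python
import cffi

ffi = cffi.FFI()
ffi.cdef("int add(int a, int b);")
# verify() compiles the snippet and returns a library object whose
# attributes are the CPython wrappers generated by this engine.
lib = ffi.verify("int add(int a, int b) { return a + b; }")
assert lib.add(2, 3) == 5
```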
+ localvars = set() + freelines = set() + for type in tp.args: + self._extra_local_variables(type, localvars, freelines) + for decl in sorted(localvars): + prnt(' %s;' % (decl,)) + # + if not isinstance(tp.result, model.VoidType): + result_code = 'result = ' + context = 'result of %s' % name + prnt(' %s;' % tp.result.get_c_name(' result', context)) + prnt(' PyObject *pyresult;') + else: + result_code = '' + # + if len(tp.args) > 1: + rng = range(len(tp.args)) + for i in rng: + prnt(' PyObject *arg%d;' % i) + prnt() + prnt(' if (!PyArg_ParseTuple(args, "%s:%s", %s))' % ( + 'O' * numargs, name, ', '.join(['&arg%d' % i for i in rng]))) + prnt(' return NULL;') + prnt() + # + for i, type in enumerate(tp.args): + self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, + 'return NULL') + prnt() + # + prnt(' Py_BEGIN_ALLOW_THREADS') + prnt(' _cffi_restore_errno();') + prnt(' { %s%s(%s); }' % ( + result_code, name, + ', '.join(['x%d' % i for i in range(len(tp.args))]))) + prnt(' _cffi_save_errno();') + prnt(' Py_END_ALLOW_THREADS') + prnt() + # + prnt(' (void)self; /* unused */') + if numargs == 0: + prnt(' (void)noarg; /* unused */') + if result_code: + prnt(' pyresult = %s;' % + self._convert_expr_from_c(tp.result, 'result', 'result type')) + for freeline in freelines: + prnt(' ' + freeline) + prnt(' return pyresult;') + else: + for freeline in freelines: + prnt(' ' + freeline) + prnt(' Py_INCREF(Py_None);') + prnt(' return Py_None;') + prnt('}') + prnt() + + def _generate_cpy_function_method(self, tp, name): + if tp.ellipsis: + return + numargs = len(tp.args) + if numargs == 0: + meth = 'METH_NOARGS' + elif numargs == 1: + meth = 'METH_O' + else: + meth = 'METH_VARARGS' + self._prnt(' {"%s", _cffi_f_%s, %s, NULL},' % (name, name, meth)) + + _loading_cpy_function = _loaded_noop + + def _loaded_cpy_function(self, tp, name, module, library): + if tp.ellipsis: + return + func = getattr(module, name) + setattr(library, name, func) + self._types_of_builtin_functions[func] = tp + + # ---------- + # named structs + + _generate_cpy_struct_collecttype = _generate_nothing + def _generate_cpy_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + def _generate_cpy_struct_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'struct', name) + def _loading_cpy_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + def _loaded_cpy_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + _generate_cpy_union_collecttype = _generate_nothing + def _generate_cpy_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + def _generate_cpy_union_method(self, tp, name): + self._generate_struct_or_union_method(tp, 'union', name) + def _loading_cpy_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + def _loaded_cpy_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in 
tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + and ftype.is_integer_type()) or fbitsize >= 0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. + try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + prnt('static PyObject *') + prnt('%s(PyObject *self, PyObject *noarg)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static Py_ssize_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' (void)self; /* unused */') + prnt(' (void)noarg; /* unused */') + prnt(' return _cffi_get_struct_layout(nums);') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _generate_struct_or_union_method(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + self._prnt(' {"%s", %s, METH_NOARGS, NULL},' % (layoutfuncname, + layoutfuncname)) + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + function = getattr(module, layoutfuncname) + layout = function() + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 
'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. + + _generate_cpy_anonymous_collecttype = _generate_nothing + + def _generate_cpy_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_cpy_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _generate_cpy_anonymous_method(self, tp, name): + if not isinstance(tp, model.EnumType): + self._generate_struct_or_union_method(tp, '', name) + + def _loading_cpy_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_cpy_enum(tp, name, module) + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_cpy_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_cpy_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_cpy_const(self, is_int, name, tp=None, category='const', + vartp=None, delayed=True, size_too=False, + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + prnt('static int %s(PyObject *lib)' % funcname) + prnt('{') + prnt(' PyObject *o;') + prnt(' int res;') + if not is_int: + prnt(' %s;' % (vartp or tp).get_c_name(' i', name)) + else: + assert category == 'const' + # + if check_value is not None: + self._check_int_constant_value(name, check_value) + # + if not is_int: + if category == 'var': + realexpr = '&' + name + else: + realexpr = name + prnt(' i = (%s);' % (realexpr,)) + prnt(' o = %s;' % (self._convert_expr_from_c(tp, 'i', + 'variable type'),)) + assert delayed + else: + prnt(' o = _cffi_from_c_int_const(%s);' % name) + prnt(' if (o == NULL)') + prnt(' return -1;') + if size_too: + prnt(' {') + prnt(' PyObject *o1 = o;') + prnt(' o = Py_BuildValue("On", o1, (Py_ssize_t)sizeof(%s));' + % (name,)) + prnt(' Py_DECREF(o1);') + prnt(' if (o == NULL)') + prnt(' return -1;') + prnt(' }') + prnt(' res = PyObject_SetAttrString(lib, "%s", o);' % name) + prnt(' Py_DECREF(o);') + prnt(' if (res < 0)') + prnt(' return -1;') + prnt(' return %s;' % self._chained_list_constants[delayed]) + self._chained_list_constants[delayed] = funcname + '(lib)' + prnt('}') + prnt() + + def _generate_cpy_constant_collecttype(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + if not is_int: + self._do_collect_type(tp) + + def _generate_cpy_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_cpy_const(is_int, name, tp) + + _generate_cpy_constant_method = _generate_nothing + _loading_cpy_constant = _loaded_noop + _loaded_cpy_constant = _loaded_noop + + # ---------- + # enums + + def _check_int_constant_value(self, name, value, err_prefix=''): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' snprintf(buf, 63, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' snprintf(buf, 63, "%%lu", (unsigned long)(%s));' % + name) + prnt(' PyErr_Format(_cffi_VerificationError,') + prnt(' "%s%s has the real value %s, not %s",') + prnt(' "%s", "%s", buf, "%d");' % ( + err_prefix, name, value)) + prnt(' return -1;') + prnt(' }') + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => 
"___D_enum____D_1" + name = name.replace('$', '___D_') + return '_cffi_e_%s_%s' % (prefix, name) + + def _generate_cpy_enum_decl(self, tp, name, prefix='enum'): + if tp.partial: + for enumerator in tp.enumerators: + self._generate_cpy_const(True, enumerator, delayed=False) + return + # + funcname = self._enum_funcname(prefix, name) + prnt = self._prnt + prnt('static int %s(PyObject *lib)' % funcname) + prnt('{') + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._check_int_constant_value(enumerator, enumvalue, + "enum %s: " % name) + prnt(' return %s;' % self._chained_list_constants[True]) + self._chained_list_constants[True] = funcname + '(lib)' + prnt('}') + prnt() + + _generate_cpy_enum_collecttype = _generate_nothing + _generate_cpy_enum_method = _generate_nothing + + def _loading_cpy_enum(self, tp, name, module): + if tp.partial: + enumvalues = [getattr(module, enumerator) + for enumerator in tp.enumerators] + tp.enumvalues = tuple(enumvalues) + tp.partial_resolved = True + + def _loaded_cpy_enum(self, tp, name, module, library): + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + setattr(library, enumerator, enumvalue) + + # ---------- + # macros: for now only for integers + + def _generate_cpy_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_cpy_const(True, name, check_value=check_value) + + _generate_cpy_macro_collecttype = _generate_nothing + _generate_cpy_macro_method = _generate_nothing + _loading_cpy_macro = _loaded_noop + _loaded_cpy_macro = _loaded_noop + + # ---------- + # global variables + + def _generate_cpy_variable_collecttype(self, tp, name): + if isinstance(tp, model.ArrayType): + tp_ptr = model.PointerType(tp.item) + else: + tp_ptr = model.PointerType(tp) + self._do_collect_type(tp_ptr) + + def _generate_cpy_variable_decl(self, tp, name): + if isinstance(tp, model.ArrayType): + tp_ptr = model.PointerType(tp.item) + self._generate_cpy_const(False, name, tp, vartp=tp_ptr, + size_too = tp.length_is_unknown()) + else: + tp_ptr = model.PointerType(tp) + self._generate_cpy_const(False, name, tp_ptr, category='var') + + _generate_cpy_variable_method = _generate_nothing + _loading_cpy_variable = _loaded_noop + + def _loaded_cpy_variable(self, tp, name, module, library): + value = getattr(library, name) + if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the + # sense that "a=..." is forbidden + if tp.length_is_unknown(): + assert isinstance(value, tuple) + (value, size) = value + BItemType = self.ffi._get_cached_btype(tp.item) + length, rest = divmod(size, self.ffi.sizeof(BItemType)) + if rest != 0: + raise VerificationError( + "bad size: %r does not seem to be an array of %s" % + (name, tp.item)) + tp = tp.resolve_length(length) + # 'value' is a which we have to replace with + # a if the N is actually known + if tp.length is not None: + BArray = self.ffi._get_cached_btype(tp) + value = self.ffi.cast(BArray, value) + setattr(library, name, value) + return + # remove ptr= from the library instance, and replace + # it by a property on the class, which reads/writes into ptr[0]. 
+        ptr = value
+        delattr(library, name)
+        def getter(library):
+            return ptr[0]
+        def setter(library, value):
+            ptr[0] = value
+        setattr(type(library), name, property(getter, setter))
+        type(library)._cffi_dir.append(name)
+
+    # ----------
+
+    def _generate_setup_custom(self):
+        prnt = self._prnt
+        prnt('static int _cffi_setup_custom(PyObject *lib)')
+        prnt('{')
+        prnt('  return %s;' % self._chained_list_constants[True])
+        prnt('}')
+
+cffimod_header = r'''
+#include <Python.h>
+#include <stddef.h>
+
+/* this block of #ifs should be kept exactly identical between
+   c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
+   and cffi/_cffi_include.h */
+#if defined(_MSC_VER)
+# include <malloc.h>   /* for alloca() */
+# if _MSC_VER < 1600   /* MSVC < 2010 */
+   typedef __int8 int8_t;
+   typedef __int16 int16_t;
+   typedef __int32 int32_t;
+   typedef __int64 int64_t;
+   typedef unsigned __int8 uint8_t;
+   typedef unsigned __int16 uint16_t;
+   typedef unsigned __int32 uint32_t;
+   typedef unsigned __int64 uint64_t;
+   typedef __int8 int_least8_t;
+   typedef __int16 int_least16_t;
+   typedef __int32 int_least32_t;
+   typedef __int64 int_least64_t;
+   typedef unsigned __int8 uint_least8_t;
+   typedef unsigned __int16 uint_least16_t;
+   typedef unsigned __int32 uint_least32_t;
+   typedef unsigned __int64 uint_least64_t;
+   typedef __int8 int_fast8_t;
+   typedef __int16 int_fast16_t;
+   typedef __int32 int_fast32_t;
+   typedef __int64 int_fast64_t;
+   typedef unsigned __int8 uint_fast8_t;
+   typedef unsigned __int16 uint_fast16_t;
+   typedef unsigned __int32 uint_fast32_t;
+   typedef unsigned __int64 uint_fast64_t;
+   typedef __int64 intmax_t;
+   typedef unsigned __int64 uintmax_t;
+# else
+#  include <stdint.h>
+# endif
+# if _MSC_VER < 1800   /* MSVC < 2013 */
+#  ifndef __cplusplus
+    typedef unsigned char _Bool;
+#  endif
+# endif
+# define _cffi_float_complex_t   _Fcomplex    /* include <complex.h> for it */
+# define _cffi_double_complex_t  _Dcomplex    /* include <complex.h> for it */
+#else
+# include <stdint.h>
+# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
+#  include <alloca.h>
+# endif
+# define _cffi_float_complex_t   float _Complex
+# define _cffi_double_complex_t  double _Complex
+#endif
+
+#if PY_MAJOR_VERSION < 3
+# undef PyCapsule_CheckExact
+# undef PyCapsule_GetPointer
+# define PyCapsule_CheckExact(capsule) (PyCObject_Check(capsule))
+# define PyCapsule_GetPointer(capsule, name) \
+    (PyCObject_AsVoidPtr(capsule))
+#endif
+
+#if PY_MAJOR_VERSION >= 3
+# define PyInt_FromLong PyLong_FromLong
+#endif
+
+#define _cffi_from_c_double PyFloat_FromDouble
+#define _cffi_from_c_float PyFloat_FromDouble
+#define _cffi_from_c_long PyInt_FromLong
+#define _cffi_from_c_ulong PyLong_FromUnsignedLong
+#define _cffi_from_c_longlong PyLong_FromLongLong
+#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong
+#define _cffi_from_c__Bool PyBool_FromLong
+
+#define _cffi_to_c_double PyFloat_AsDouble
+#define _cffi_to_c_float PyFloat_AsDouble
+
+#define _cffi_from_c_int_const(x)                                        \
+    (((x) > 0) ?                                                         \
+        ((unsigned long long)(x) <= (unsigned long long)LONG_MAX) ?      \
+            PyInt_FromLong((long)(x)) :                                  \
+            PyLong_FromUnsignedLongLong((unsigned long long)(x)) :       \
+        ((long long)(x) >= (long long)LONG_MIN) ?                        \
+            PyInt_FromLong((long)(x)) :                                  \
+            PyLong_FromLongLong((long long)(x)))
+
+#define _cffi_from_c_int(x, type)                                        \
+    (((type)-1) > 0 ? /* unsigned */                                     \
+        (sizeof(type) < sizeof(long) ?                                   \
+            PyInt_FromLong((long)x) :                                    \
+         sizeof(type) == sizeof(long) ?                                  \
+            PyLong_FromUnsignedLong((unsigned long)x) :                  \
+            PyLong_FromUnsignedLongLong((unsigned long long)x)) :        \
+        (sizeof(type) <= sizeof(long) ?
\ + PyInt_FromLong((long)x) : \ + PyLong_FromLongLong((long long)x))) + +#define _cffi_to_c_int(o, type) \ + ((type)( \ + sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ + : (type)_cffi_to_c_i8(o)) : \ + sizeof(type) == 2 ? (((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ + : (type)_cffi_to_c_i16(o)) : \ + sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ + : (type)_cffi_to_c_i32(o)) : \ + sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ + : (type)_cffi_to_c_i64(o)) : \ + (Py_FatalError("unsupported size for type " #type), (type)0))) + +#define _cffi_to_c_i8 \ + ((int(*)(PyObject *))_cffi_exports[1]) +#define _cffi_to_c_u8 \ + ((int(*)(PyObject *))_cffi_exports[2]) +#define _cffi_to_c_i16 \ + ((int(*)(PyObject *))_cffi_exports[3]) +#define _cffi_to_c_u16 \ + ((int(*)(PyObject *))_cffi_exports[4]) +#define _cffi_to_c_i32 \ + ((int(*)(PyObject *))_cffi_exports[5]) +#define _cffi_to_c_u32 \ + ((unsigned int(*)(PyObject *))_cffi_exports[6]) +#define _cffi_to_c_i64 \ + ((long long(*)(PyObject *))_cffi_exports[7]) +#define _cffi_to_c_u64 \ + ((unsigned long long(*)(PyObject *))_cffi_exports[8]) +#define _cffi_to_c_char \ + ((int(*)(PyObject *))_cffi_exports[9]) +#define _cffi_from_c_pointer \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[10]) +#define _cffi_to_c_pointer \ + ((char *(*)(PyObject *, CTypeDescrObject *))_cffi_exports[11]) +#define _cffi_get_struct_layout \ + ((PyObject *(*)(Py_ssize_t[]))_cffi_exports[12]) +#define _cffi_restore_errno \ + ((void(*)(void))_cffi_exports[13]) +#define _cffi_save_errno \ + ((void(*)(void))_cffi_exports[14]) +#define _cffi_from_c_char \ + ((PyObject *(*)(char))_cffi_exports[15]) +#define _cffi_from_c_deref \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[16]) +#define _cffi_to_c \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[17]) +#define _cffi_from_c_struct \ + ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[18]) +#define _cffi_to_c_wchar_t \ + ((wchar_t(*)(PyObject *))_cffi_exports[19]) +#define _cffi_from_c_wchar_t \ + ((PyObject *(*)(wchar_t))_cffi_exports[20]) +#define _cffi_to_c_long_double \ + ((long double(*)(PyObject *))_cffi_exports[21]) +#define _cffi_to_c__Bool \ + ((_Bool(*)(PyObject *))_cffi_exports[22]) +#define _cffi_prepare_pointer_call_argument \ + ((Py_ssize_t(*)(CTypeDescrObject *, PyObject *, char **))_cffi_exports[23]) +#define _cffi_convert_array_from_object \ + ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[24]) +#define _CFFI_NUM_EXPORTS 25 + +typedef struct _ctypedescr CTypeDescrObject; + +static void *_cffi_exports[_CFFI_NUM_EXPORTS]; +static PyObject *_cffi_types, *_cffi_VerificationError; + +static int _cffi_setup_custom(PyObject *lib); /* forward */ + +static PyObject *_cffi_setup(PyObject *self, PyObject *args) +{ + PyObject *library; + int was_alive = (_cffi_types != NULL); + (void)self; /* unused */ + if (!PyArg_ParseTuple(args, "OOO", &_cffi_types, &_cffi_VerificationError, + &library)) + return NULL; + Py_INCREF(_cffi_types); + Py_INCREF(_cffi_VerificationError); + if (_cffi_setup_custom(library) < 0) + return NULL; + return PyBool_FromLong(was_alive); +} + +union _cffi_union_alignment_u { + unsigned char m_char; + unsigned short m_short; + unsigned int m_int; + unsigned long m_long; + unsigned long long m_longlong; + float m_float; + double m_double; + long double m_longdouble; +}; + +struct _cffi_freeme_s { + struct _cffi_freeme_s *next; + union _cffi_union_alignment_u alignment; +}; + +#ifdef __GNUC__ + 
__attribute__((unused)) +#endif +static int _cffi_convert_array_argument(CTypeDescrObject *ctptr, PyObject *arg, + char **output_data, Py_ssize_t datasize, + struct _cffi_freeme_s **freeme) +{ + char *p; + if (datasize < 0) + return -1; + + p = *output_data; + if (p == NULL) { + struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( + offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); + if (fp == NULL) + return -1; + fp->next = *freeme; + *freeme = fp; + p = *output_data = (char *)&fp->alignment; + } + memset((void *)p, 0, (size_t)datasize); + return _cffi_convert_array_from_object(p, ctptr, arg); +} + +#ifdef __GNUC__ + __attribute__((unused)) +#endif +static void _cffi_free_array_arguments(struct _cffi_freeme_s *freeme) +{ + do { + void *p = (void *)freeme; + freeme = freeme->next; + PyObject_Free(p); + } while (freeme != NULL); +} + +static int _cffi_init(void) +{ + PyObject *module, *c_api_object = NULL; + + module = PyImport_ImportModule("_cffi_backend"); + if (module == NULL) + goto failure; + + c_api_object = PyObject_GetAttrString(module, "_C_API"); + if (c_api_object == NULL) + goto failure; + if (!PyCapsule_CheckExact(c_api_object)) { + PyErr_SetNone(PyExc_ImportError); + goto failure; + } + memcpy(_cffi_exports, PyCapsule_GetPointer(c_api_object, "cffi"), + _CFFI_NUM_EXPORTS * sizeof(void *)); + + Py_DECREF(module); + Py_DECREF(c_api_object); + return 0; + + failure: + Py_XDECREF(module); + Py_XDECREF(c_api_object); + return -1; +} + +#define _cffi_type(num) ((CTypeDescrObject *)PyList_GET_ITEM(_cffi_types, num)) + +/**********/ +''' diff --git a/templates/skills/file_manager/dependencies/cffi/vengine_gen.py b/templates/skills/file_manager/dependencies/cffi/vengine_gen.py new file mode 100644 index 00000000..bffc8212 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cffi/vengine_gen.py @@ -0,0 +1,679 @@ +# +# DEPRECATED: implementation for ffi.verify() +# +import sys, os +import types + +from . import model +from .error import VerificationError + + +class VGenericEngine(object): + _class_key = 'g' + _gen_python_module = False + + def __init__(self, verifier): + self.verifier = verifier + self.ffi = verifier.ffi + self.export_symbols = [] + self._struct_pending_verification = {} + + def patch_extension_kwds(self, kwds): + # add 'export_symbols' to the dictionary. Note that we add the + # list before filling it. When we fill it, it will thus also show + # up in kwds['export_symbols']. + kwds.setdefault('export_symbols', self.export_symbols) + + def find_module(self, module_name, path, so_suffixes): + for so_suffix in so_suffixes: + basename = module_name + so_suffix + if path is None: + path = sys.path + for dirname in path: + filename = os.path.join(dirname, basename) + if os.path.isfile(filename): + return filename + + def collect_types(self): + pass # not needed in the generic engine + + def _prnt(self, what=''): + self._f.write(what + '\n') + + def write_source_to_f(self): + prnt = self._prnt + # first paste some standard set of lines that are mostly '#include' + prnt(cffimod_header) + # then paste the C source given by the user, verbatim. + prnt(self.verifier.preamble) + # + # call generate_gen_xxx_decl(), for every xxx found from + # ffi._parser._declarations. This generates all the functions. 
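A hedged usage sketch for this generic engine: `force_generic_engine` is the `Verifier` flag (see verifier.py below) that selects `VGenericEngine` instead of the CPython-specific one:

```python
import cffi

ffi = cffi.FFI()
ffi.cdef("double square(double x);")
# the generic engine loads the result through the cffi backend's
# load_library() rather than importing a CPython extension module
lib = ffi.verify("double square(double x) { return x * x; }",
                 force_generic_engine=True)
assert lib.square(3.0) == 9.0
```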
+ self._generate('decl') + # + # on Windows, distutils insists on putting init_cffi_xyz in + # 'export_symbols', so instead of fighting it, just give up and + # give it one + if sys.platform == 'win32': + if sys.version_info >= (3,): + prefix = 'PyInit_' + else: + prefix = 'init' + modname = self.verifier.get_module_name() + prnt("void %s%s(void) { }\n" % (prefix, modname)) + + def load_library(self, flags=0): + # import it with the CFFI backend + backend = self.ffi._backend + # needs to make a path that contains '/', on Posix + filename = os.path.join(os.curdir, self.verifier.modulefilename) + module = backend.load_library(filename, flags) + # + # call loading_gen_struct() to get the struct layout inferred by + # the C compiler + self._load(module, 'loading') + + # build the FFILibrary class and instance, this is a module subclass + # because modules are expected to have usually-constant-attributes and + # in PyPy this means the JIT is able to treat attributes as constant, + # which we want. + class FFILibrary(types.ModuleType): + _cffi_generic_module = module + _cffi_ffi = self.ffi + _cffi_dir = [] + def __dir__(self): + return FFILibrary._cffi_dir + library = FFILibrary("") + # + # finally, call the loaded_gen_xxx() functions. This will set + # up the 'library' object. + self._load(module, 'loaded', library=library) + return library + + def _get_declarations(self): + lst = [(key, tp) for (key, (tp, qual)) in + self.ffi._parser._declarations.items()] + lst.sort() + return lst + + def _generate(self, step_name): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + try: + method = getattr(self, '_generate_gen_%s_%s' % (kind, + step_name)) + except AttributeError: + raise VerificationError( + "not implemented in verify(): %r" % name) + try: + method(tp, realname) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _load(self, module, step_name, **kwds): + for name, tp in self._get_declarations(): + kind, realname = name.split(' ', 1) + method = getattr(self, '_%s_gen_%s' % (step_name, kind)) + try: + method(tp, realname, module, **kwds) + except Exception as e: + model.attach_exception_info(e, name) + raise + + def _generate_nothing(self, tp, name): + pass + + def _loaded_noop(self, tp, name, module, **kwds): + pass + + # ---------- + # typedefs: generates no code so far + + _generate_gen_typedef_decl = _generate_nothing + _loading_gen_typedef = _loaded_noop + _loaded_gen_typedef = _loaded_noop + + # ---------- + # function declarations + + def _generate_gen_function_decl(self, tp, name): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + # cannot support vararg functions better than this: check for its + # exact type (including the fixed arguments), and build it as a + # constant function pointer (no _cffi_f_%s wrapper) + self._generate_gen_const(False, name, tp) + return + prnt = self._prnt + numargs = len(tp.args) + argnames = [] + for i, type in enumerate(tp.args): + indirection = '' + if isinstance(type, model.StructOrUnion): + indirection = '*' + argnames.append('%sx%d' % (indirection, i)) + context = 'argument of %s' % name + arglist = [type.get_c_name(' %s' % arg, context) + for type, arg in zip(tp.args, argnames)] + tpresult = tp.result + if isinstance(tpresult, model.StructOrUnion): + arglist.insert(0, tpresult.get_c_name(' *r', context)) + tpresult = model.void_type + arglist = ', '.join(arglist) or 'void' + wrappername = '_cffi_f_%s' % name + self.export_symbols.append(wrappername) + if tp.abi: + abi = 
tp.abi + ' ' + else: + abi = '' + funcdecl = ' %s%s(%s)' % (abi, wrappername, arglist) + context = 'result of %s' % name + prnt(tpresult.get_c_name(funcdecl, context)) + prnt('{') + # + if isinstance(tp.result, model.StructOrUnion): + result_code = '*r = ' + elif not isinstance(tp.result, model.VoidType): + result_code = 'return ' + else: + result_code = '' + prnt(' %s%s(%s);' % (result_code, name, ', '.join(argnames))) + prnt('}') + prnt() + + _loading_gen_function = _loaded_noop + + def _loaded_gen_function(self, tp, name, module, library): + assert isinstance(tp, model.FunctionPtrType) + if tp.ellipsis: + newfunction = self._load_constant(False, tp, name, module) + else: + indirections = [] + base_tp = tp + if (any(isinstance(typ, model.StructOrUnion) for typ in tp.args) + or isinstance(tp.result, model.StructOrUnion)): + indirect_args = [] + for i, typ in enumerate(tp.args): + if isinstance(typ, model.StructOrUnion): + typ = model.PointerType(typ) + indirections.append((i, typ)) + indirect_args.append(typ) + indirect_result = tp.result + if isinstance(indirect_result, model.StructOrUnion): + if indirect_result.fldtypes is None: + raise TypeError("'%s' is used as result type, " + "but is opaque" % ( + indirect_result._get_c_name(),)) + indirect_result = model.PointerType(indirect_result) + indirect_args.insert(0, indirect_result) + indirections.insert(0, ("result", indirect_result)) + indirect_result = model.void_type + tp = model.FunctionPtrType(tuple(indirect_args), + indirect_result, tp.ellipsis) + BFunc = self.ffi._get_cached_btype(tp) + wrappername = '_cffi_f_%s' % name + newfunction = module.load_function(BFunc, wrappername) + for i, typ in indirections: + newfunction = self._make_struct_wrapper(newfunction, i, typ, + base_tp) + setattr(library, name, newfunction) + type(library)._cffi_dir.append(name) + + def _make_struct_wrapper(self, oldfunc, i, tp, base_tp): + backend = self.ffi._backend + BType = self.ffi._get_cached_btype(tp) + if i == "result": + ffi = self.ffi + def newfunc(*args): + res = ffi.new(BType) + oldfunc(res, *args) + return res[0] + else: + def newfunc(*args): + args = args[:i] + (backend.newp(BType, args[i]),) + args[i+1:] + return oldfunc(*args) + newfunc._cffi_base_type = base_tp + return newfunc + + # ---------- + # named structs + + def _generate_gen_struct_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'struct', name) + + def _loading_gen_struct(self, tp, name, module): + self._loading_struct_or_union(tp, 'struct', name, module) + + def _loaded_gen_struct(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_gen_union_decl(self, tp, name): + assert name == tp.name + self._generate_struct_or_union_decl(tp, 'union', name) + + def _loading_gen_union(self, tp, name, module): + self._loading_struct_or_union(tp, 'union', name, module) + + def _loaded_gen_union(self, tp, name, module, **kwds): + self._loaded_struct_or_union(tp) + + def _generate_struct_or_union_decl(self, tp, prefix, name): + if tp.fldnames is None: + return # nothing to do with opaque structs + checkfuncname = '_cffi_check_%s_%s' % (prefix, name) + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + cname = ('%s %s' % (prefix, name)).strip() + # + prnt = self._prnt + prnt('static void %s(%s *p)' % (checkfuncname, cname)) + prnt('{') + prnt(' /* only to generate compile-time warnings or errors */') + prnt(' (void)p;') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if (isinstance(ftype, model.PrimitiveType) + 
and ftype.is_integer_type()) or fbitsize >= 0: + # accept all integers, but complain on float or double + prnt(' (void)((p->%s) << 1);' % fname) + else: + # only accept exactly the type declared. + try: + prnt(' { %s = &p->%s; (void)tmp; }' % ( + ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), + fname)) + except VerificationError as e: + prnt(' /* %s */' % str(e)) # cannot verify it, ignore + prnt('}') + self.export_symbols.append(layoutfuncname) + prnt('intptr_t %s(intptr_t i)' % (layoutfuncname,)) + prnt('{') + prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) + prnt(' static intptr_t nums[] = {') + prnt(' sizeof(%s),' % cname) + prnt(' offsetof(struct _cffi_aligncheck, y),') + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + prnt(' offsetof(%s, %s),' % (cname, fname)) + if isinstance(ftype, model.ArrayType) and ftype.length is None: + prnt(' 0, /* %s */' % ftype._get_c_name()) + else: + prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) + prnt(' -1') + prnt(' };') + prnt(' return nums[i];') + prnt(' /* the next line is not executed, but compiled */') + prnt(' %s(0);' % (checkfuncname,)) + prnt('}') + prnt() + + def _loading_struct_or_union(self, tp, prefix, name, module): + if tp.fldnames is None: + return # nothing to do with opaque structs + layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) + # + BFunc = self.ffi._typeof_locked("intptr_t(*)(intptr_t)")[0] + function = module.load_function(BFunc, layoutfuncname) + layout = [] + num = 0 + while True: + x = function(num) + if x < 0: break + layout.append(x) + num += 1 + if isinstance(tp, model.StructOrUnion) and tp.partial: + # use the function()'s sizes and offsets to guide the + # layout of the struct + totalsize = layout[0] + totalalignment = layout[1] + fieldofs = layout[2::2] + fieldsize = layout[3::2] + tp.force_flatten() + assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) + tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment + else: + cname = ('%s %s' % (prefix, name)).strip() + self._struct_pending_verification[tp] = layout, cname + + def _loaded_struct_or_union(self, tp): + if tp.fldnames is None: + return # nothing to do with opaque structs + self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered + + if tp in self._struct_pending_verification: + # check that the layout sizes and offsets match the real ones + def check(realvalue, expectedvalue, msg): + if realvalue != expectedvalue: + raise VerificationError( + "%s (we have %d, but C compiler says %d)" + % (msg, expectedvalue, realvalue)) + ffi = self.ffi + BStruct = ffi._get_cached_btype(tp) + layout, cname = self._struct_pending_verification.pop(tp) + check(layout[0], ffi.sizeof(BStruct), "wrong total size") + check(layout[1], ffi.alignof(BStruct), "wrong total alignment") + i = 2 + for fname, ftype, fbitsize, fqual in tp.enumfields(): + if fbitsize >= 0: + continue # xxx ignore fbitsize for now + check(layout[i], ffi.offsetof(BStruct, fname), + "wrong offset for field %r" % (fname,)) + if layout[i+1] != 0: + BField = ffi._get_cached_btype(ftype) + check(layout[i+1], ffi.sizeof(BField), + "wrong size for field %r" % (fname,)) + i += 2 + assert i == len(layout) + + # ---------- + # 'anonymous' declarations. These are produced for anonymous structs + # or unions; the 'name' is obtained by a typedef. 
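How `_loading_struct_or_union()` above decodes the flat layout array can be shown with hypothetical numbers for a partial `struct { int a; int b; }` on a platform with 4-byte ints:

```python
# values collected by calling the generated _cffi_layout_* function with
# 0, 1, 2, ... until it returns a negative number
layout = [8, 4, 0, 4, 4, 4]

totalsize      = layout[0]     # sizeof(struct)          -> 8
totalalignment = layout[1]     # offsetof(aligncheck, y) -> 4
fieldofs  = layout[2::2]       # per-field offsets       -> [0, 4]
fieldsize = layout[3::2]       # per-field sizes         -> [4, 4]
assert fieldofs == [0, 4] and fieldsize == [4, 4]
```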
+ + def _generate_gen_anonymous_decl(self, tp, name): + if isinstance(tp, model.EnumType): + self._generate_gen_enum_decl(tp, name, '') + else: + self._generate_struct_or_union_decl(tp, '', name) + + def _loading_gen_anonymous(self, tp, name, module): + if isinstance(tp, model.EnumType): + self._loading_gen_enum(tp, name, module, '') + else: + self._loading_struct_or_union(tp, '', name, module) + + def _loaded_gen_anonymous(self, tp, name, module, **kwds): + if isinstance(tp, model.EnumType): + self._loaded_gen_enum(tp, name, module, **kwds) + else: + self._loaded_struct_or_union(tp) + + # ---------- + # constants, likely declared with '#define' + + def _generate_gen_const(self, is_int, name, tp=None, category='const', + check_value=None): + prnt = self._prnt + funcname = '_cffi_%s_%s' % (category, name) + self.export_symbols.append(funcname) + if check_value is not None: + assert is_int + assert category == 'const' + prnt('int %s(char *out_error)' % funcname) + prnt('{') + self._check_int_constant_value(name, check_value) + prnt(' return 0;') + prnt('}') + elif is_int: + assert category == 'const' + prnt('int %s(long long *out_value)' % funcname) + prnt('{') + prnt(' *out_value = (long long)(%s);' % (name,)) + prnt(' return (%s) <= 0;' % (name,)) + prnt('}') + else: + assert tp is not None + assert check_value is None + if category == 'var': + ampersand = '&' + else: + ampersand = '' + extra = '' + if category == 'const' and isinstance(tp, model.StructOrUnion): + extra = 'const *' + ampersand = '&' + prnt(tp.get_c_name(' %s%s(void)' % (extra, funcname), name)) + prnt('{') + prnt(' return (%s%s);' % (ampersand, name)) + prnt('}') + prnt() + + def _generate_gen_constant_decl(self, tp, name): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + self._generate_gen_const(is_int, name, tp) + + _loading_gen_constant = _loaded_noop + + def _load_constant(self, is_int, tp, name, module, check_value=None): + funcname = '_cffi_const_%s' % name + if check_value is not None: + assert is_int + self._load_known_int_constant(module, funcname) + value = check_value + elif is_int: + BType = self.ffi._typeof_locked("long long*")[0] + BFunc = self.ffi._typeof_locked("int(*)(long long*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType) + negative = function(p) + value = int(p[0]) + if value < 0 and not negative: + BLongLong = self.ffi._typeof_locked("long long")[0] + value += (1 << (8*self.ffi.sizeof(BLongLong))) + else: + assert check_value is None + fntypeextra = '(*)(void)' + if isinstance(tp, model.StructOrUnion): + fntypeextra = '*' + fntypeextra + BFunc = self.ffi._typeof_locked(tp.get_c_name(fntypeextra, name))[0] + function = module.load_function(BFunc, funcname) + value = function() + if isinstance(tp, model.StructOrUnion): + value = value[0] + return value + + def _loaded_gen_constant(self, tp, name, module, library): + is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() + value = self._load_constant(is_int, tp, name, module) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # enums + + def _check_int_constant_value(self, name, value): + prnt = self._prnt + if value <= 0: + prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( + name, name, value)) + else: + prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( + name, name, value)) + prnt(' char buf[64];') + prnt(' if ((%s) <= 0)' % name) + prnt(' sprintf(buf, "%%ld", (long)(%s));' % name) + prnt(' else') + prnt(' sprintf(buf, "%%lu", 
(unsigned long)(%s));' % + name) + prnt(' sprintf(out_error, "%s has the real value %s, not %s",') + prnt(' "%s", buf, "%d");' % (name[:100], value)) + prnt(' return -1;') + prnt(' }') + + def _load_known_int_constant(self, module, funcname): + BType = self.ffi._typeof_locked("char[]")[0] + BFunc = self.ffi._typeof_locked("int(*)(char*)")[0] + function = module.load_function(BFunc, funcname) + p = self.ffi.new(BType, 256) + if function(p) < 0: + error = self.ffi.string(p) + if sys.version_info >= (3,): + error = str(error, 'utf-8') + raise VerificationError(error) + + def _enum_funcname(self, prefix, name): + # "$enum_$1" => "___D_enum____D_1" + name = name.replace('$', '___D_') + return '_cffi_e_%s_%s' % (prefix, name) + + def _generate_gen_enum_decl(self, tp, name, prefix='enum'): + if tp.partial: + for enumerator in tp.enumerators: + self._generate_gen_const(True, enumerator) + return + # + funcname = self._enum_funcname(prefix, name) + self.export_symbols.append(funcname) + prnt = self._prnt + prnt('int %s(char *out_error)' % funcname) + prnt('{') + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + self._check_int_constant_value(enumerator, enumvalue) + prnt(' return 0;') + prnt('}') + prnt() + + def _loading_gen_enum(self, tp, name, module, prefix='enum'): + if tp.partial: + enumvalues = [self._load_constant(True, tp, enumerator, module) + for enumerator in tp.enumerators] + tp.enumvalues = tuple(enumvalues) + tp.partial_resolved = True + else: + funcname = self._enum_funcname(prefix, name) + self._load_known_int_constant(module, funcname) + + def _loaded_gen_enum(self, tp, name, module, library): + for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): + setattr(library, enumerator, enumvalue) + type(library)._cffi_dir.append(enumerator) + + # ---------- + # macros: for now only for integers + + def _generate_gen_macro_decl(self, tp, name): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + self._generate_gen_const(True, name, check_value=check_value) + + _loading_gen_macro = _loaded_noop + + def _loaded_gen_macro(self, tp, name, module, library): + if tp == '...': + check_value = None + else: + check_value = tp # an integer + value = self._load_constant(True, tp, name, module, + check_value=check_value) + setattr(library, name, value) + type(library)._cffi_dir.append(name) + + # ---------- + # global variables + + def _generate_gen_variable_decl(self, tp, name): + if isinstance(tp, model.ArrayType): + if tp.length_is_unknown(): + prnt = self._prnt + funcname = '_cffi_sizeof_%s' % (name,) + self.export_symbols.append(funcname) + prnt("size_t %s(void)" % funcname) + prnt("{") + prnt(" return sizeof(%s);" % (name,)) + prnt("}") + tp_ptr = model.PointerType(tp.item) + self._generate_gen_const(False, name, tp_ptr) + else: + tp_ptr = model.PointerType(tp) + self._generate_gen_const(False, name, tp_ptr, category='var') + + _loading_gen_variable = _loaded_noop + + def _loaded_gen_variable(self, tp, name, module, library): + if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the + # sense that "a=..." 
is forbidden
+            if tp.length_is_unknown():
+                funcname = '_cffi_sizeof_%s' % (name,)
+                BFunc = self.ffi._typeof_locked('size_t(*)(void)')[0]
+                function = module.load_function(BFunc, funcname)
+                size = function()
+                BItemType = self.ffi._get_cached_btype(tp.item)
+                length, rest = divmod(size, self.ffi.sizeof(BItemType))
+                if rest != 0:
+                    raise VerificationError(
+                        "bad size: %r does not seem to be an array of %s" %
+                        (name, tp.item))
+                tp = tp.resolve_length(length)
+            tp_ptr = model.PointerType(tp.item)
+            value = self._load_constant(False, tp_ptr, name, module)
+            # 'value' is a <cdata 'type *'> which we have to replace with
+            # a <cdata 'type[N]'> if the N is actually known
+            if tp.length is not None:
+                BArray = self.ffi._get_cached_btype(tp)
+                value = self.ffi.cast(BArray, value)
+            setattr(library, name, value)
+            type(library)._cffi_dir.append(name)
+            return
+        # remove ptr=<cdata 'int *' 0x1234> from the library instance, and
+        # replace it by a property on the class, which reads/writes into ptr[0].
+        funcname = '_cffi_var_%s' % name
+        BFunc = self.ffi._typeof_locked(tp.get_c_name('*(*)(void)', name))[0]
+        function = module.load_function(BFunc, funcname)
+        ptr = function()
+        def getter(library):
+            return ptr[0]
+        def setter(library, value):
+            ptr[0] = value
+        setattr(type(library), name, property(getter, setter))
+        type(library)._cffi_dir.append(name)
+
+cffimod_header = r'''
+#include <stdio.h>
+#include <stddef.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <sys/types.h>   /* XXX for ssize_t on some platforms */
+
+/* this block of #ifs should be kept exactly identical between
+   c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py
+   and cffi/_cffi_include.h */
+#if defined(_MSC_VER)
+# include <malloc.h>   /* for alloca() */
+# if _MSC_VER < 1600   /* MSVC < 2010 */
+   typedef __int8 int8_t;
+   typedef __int16 int16_t;
+   typedef __int32 int32_t;
+   typedef __int64 int64_t;
+   typedef unsigned __int8 uint8_t;
+   typedef unsigned __int16 uint16_t;
+   typedef unsigned __int32 uint32_t;
+   typedef unsigned __int64 uint64_t;
+   typedef __int8 int_least8_t;
+   typedef __int16 int_least16_t;
+   typedef __int32 int_least32_t;
+   typedef __int64 int_least64_t;
+   typedef unsigned __int8 uint_least8_t;
+   typedef unsigned __int16 uint_least16_t;
+   typedef unsigned __int32 uint_least32_t;
+   typedef unsigned __int64 uint_least64_t;
+   typedef __int8 int_fast8_t;
+   typedef __int16 int_fast16_t;
+   typedef __int32 int_fast32_t;
+   typedef __int64 int_fast64_t;
+   typedef unsigned __int8 uint_fast8_t;
+   typedef unsigned __int16 uint_fast16_t;
+   typedef unsigned __int32 uint_fast32_t;
+   typedef unsigned __int64 uint_fast64_t;
+   typedef __int64 intmax_t;
+   typedef unsigned __int64 uintmax_t;
+# else
+#  include <stdint.h>
+# endif
+# if _MSC_VER < 1800   /* MSVC < 2013 */
+#  ifndef __cplusplus
+    typedef unsigned char _Bool;
+#  endif
+# endif
+# define _cffi_float_complex_t   _Fcomplex    /* include <complex.h> for it */
+# define _cffi_double_complex_t  _Dcomplex    /* include <complex.h> for it */
+#else
+# include <stdint.h>
+# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux)
+#  include <alloca.h>
+# endif
+# define _cffi_float_complex_t   float _Complex
+# define _cffi_double_complex_t  double _Complex
+#endif
+'''
diff --git a/templates/skills/file_manager/dependencies/cffi/verifier.py b/templates/skills/file_manager/dependencies/cffi/verifier.py
new file mode 100644
index 00000000..e392a2b7
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cffi/verifier.py
@@ -0,0 +1,306 @@
+#
+# DEPRECATED: implementation for ffi.verify()
+#
+import sys, os, binascii, shutil, io
+from . import __version_verifier_modules__
+from .
import ffiplatform +from .error import VerificationError + +if sys.version_info >= (3, 3): + import importlib.machinery + def _extension_suffixes(): + return importlib.machinery.EXTENSION_SUFFIXES[:] +else: + import imp + def _extension_suffixes(): + return [suffix for suffix, _, type in imp.get_suffixes() + if type == imp.C_EXTENSION] + + +if sys.version_info >= (3,): + NativeIO = io.StringIO +else: + class NativeIO(io.BytesIO): + def write(self, s): + if isinstance(s, unicode): + s = s.encode('ascii') + super(NativeIO, self).write(s) + + +class Verifier(object): + + def __init__(self, ffi, preamble, tmpdir=None, modulename=None, + ext_package=None, tag='', force_generic_engine=False, + source_extension='.c', flags=None, relative_to=None, **kwds): + if ffi._parser._uses_new_feature: + raise VerificationError( + "feature not supported with ffi.verify(), but only " + "with ffi.set_source(): %s" % (ffi._parser._uses_new_feature,)) + self.ffi = ffi + self.preamble = preamble + if not modulename: + flattened_kwds = ffiplatform.flatten(kwds) + vengine_class = _locate_engine_class(ffi, force_generic_engine) + self._vengine = vengine_class(self) + self._vengine.patch_extension_kwds(kwds) + self.flags = flags + self.kwds = self.make_relative_to(kwds, relative_to) + # + if modulename: + if tag: + raise TypeError("can't specify both 'modulename' and 'tag'") + else: + key = '\x00'.join(['%d.%d' % sys.version_info[:2], + __version_verifier_modules__, + preamble, flattened_kwds] + + ffi._cdefsources) + if sys.version_info >= (3,): + key = key.encode('utf-8') + k1 = hex(binascii.crc32(key[0::2]) & 0xffffffff) + k1 = k1.lstrip('0x').rstrip('L') + k2 = hex(binascii.crc32(key[1::2]) & 0xffffffff) + k2 = k2.lstrip('0').rstrip('L') + modulename = '_cffi_%s_%s%s%s' % (tag, self._vengine._class_key, + k1, k2) + suffix = _get_so_suffixes()[0] + self.tmpdir = tmpdir or _caller_dir_pycache() + self.sourcefilename = os.path.join(self.tmpdir, modulename + source_extension) + self.modulefilename = os.path.join(self.tmpdir, modulename + suffix) + self.ext_package = ext_package + self._has_source = False + self._has_module = False + + def write_source(self, file=None): + """Write the C source code. It is produced in 'self.sourcefilename', + which can be tweaked beforehand.""" + with self.ffi._lock: + if self._has_source and file is None: + raise VerificationError( + "source code already written") + self._write_source(file) + + def compile_module(self): + """Write the C source code (if not done already) and compile it. + This produces a dynamic link library in 'self.modulefilename'.""" + with self.ffi._lock: + if self._has_module: + raise VerificationError("module already compiled") + if not self._has_source: + self._write_source() + self._compile_module() + + def load_library(self): + """Get a C module from this Verifier instance. + Returns an instance of a FFILibrary class that behaves like the + objects returned by ffi.dlopen(), but that delegates all + operations to the C module. If necessary, the C code is written + and compiled first. 
+ """ + with self.ffi._lock: + if not self._has_module: + self._locate_module() + if not self._has_module: + if not self._has_source: + self._write_source() + self._compile_module() + return self._load_library() + + def get_module_name(self): + basename = os.path.basename(self.modulefilename) + # kill both the .so extension and the other .'s, as introduced + # by Python 3: 'basename.cpython-33m.so' + basename = basename.split('.', 1)[0] + # and the _d added in Python 2 debug builds --- but try to be + # conservative and not kill a legitimate _d + if basename.endswith('_d') and hasattr(sys, 'gettotalrefcount'): + basename = basename[:-2] + return basename + + def get_extension(self): + if not self._has_source: + with self.ffi._lock: + if not self._has_source: + self._write_source() + sourcename = ffiplatform.maybe_relative_path(self.sourcefilename) + modname = self.get_module_name() + return ffiplatform.get_extension(sourcename, modname, **self.kwds) + + def generates_python_module(self): + return self._vengine._gen_python_module + + def make_relative_to(self, kwds, relative_to): + if relative_to and os.path.dirname(relative_to): + dirname = os.path.dirname(relative_to) + kwds = kwds.copy() + for key in ffiplatform.LIST_OF_FILE_NAMES: + if key in kwds: + lst = kwds[key] + if not isinstance(lst, (list, tuple)): + raise TypeError("keyword '%s' should be a list or tuple" + % (key,)) + lst = [os.path.join(dirname, fn) for fn in lst] + kwds[key] = lst + return kwds + + # ---------- + + def _locate_module(self): + if not os.path.isfile(self.modulefilename): + if self.ext_package: + try: + pkg = __import__(self.ext_package, None, None, ['__doc__']) + except ImportError: + return # cannot import the package itself, give up + # (e.g. it might be called differently before installation) + path = pkg.__path__ + else: + path = None + filename = self._vengine.find_module(self.get_module_name(), path, + _get_so_suffixes()) + if filename is None: + return + self.modulefilename = filename + self._vengine.collect_types() + self._has_module = True + + def _write_source_to(self, file): + self._vengine._f = file + try: + self._vengine.write_source_to_f() + finally: + del self._vengine._f + + def _write_source(self, file=None): + if file is not None: + self._write_source_to(file) + else: + # Write our source file to an in memory file. 
+ f = NativeIO() + self._write_source_to(f) + source_data = f.getvalue() + + # Determine if this matches the current file + if os.path.exists(self.sourcefilename): + with open(self.sourcefilename, "r") as fp: + needs_written = not (fp.read() == source_data) + else: + needs_written = True + + # Actually write the file out if it doesn't match + if needs_written: + _ensure_dir(self.sourcefilename) + with open(self.sourcefilename, "w") as fp: + fp.write(source_data) + + # Set this flag + self._has_source = True + + def _compile_module(self): + # compile this C source + tmpdir = os.path.dirname(self.sourcefilename) + outputfilename = ffiplatform.compile(tmpdir, self.get_extension()) + try: + same = ffiplatform.samefile(outputfilename, self.modulefilename) + except OSError: + same = False + if not same: + _ensure_dir(self.modulefilename) + shutil.move(outputfilename, self.modulefilename) + self._has_module = True + + def _load_library(self): + assert self._has_module + if self.flags is not None: + return self._vengine.load_library(self.flags) + else: + return self._vengine.load_library() + +# ____________________________________________________________ + +_FORCE_GENERIC_ENGINE = False # for tests + +def _locate_engine_class(ffi, force_generic_engine): + if _FORCE_GENERIC_ENGINE: + force_generic_engine = True + if not force_generic_engine: + if '__pypy__' in sys.builtin_module_names: + force_generic_engine = True + else: + try: + import _cffi_backend + except ImportError: + _cffi_backend = '?' + if ffi._backend is not _cffi_backend: + force_generic_engine = True + if force_generic_engine: + from . import vengine_gen + return vengine_gen.VGenericEngine + else: + from . import vengine_cpy + return vengine_cpy.VCPythonEngine + +# ____________________________________________________________ + +_TMPDIR = None + +def _caller_dir_pycache(): + if _TMPDIR: + return _TMPDIR + result = os.environ.get('CFFI_TMPDIR') + if result: + return result + filename = sys._getframe(2).f_code.co_filename + return os.path.abspath(os.path.join(os.path.dirname(filename), + '__pycache__')) + +def set_tmpdir(dirname): + """Set the temporary directory to use instead of __pycache__.""" + global _TMPDIR + _TMPDIR = dirname + +def cleanup_tmpdir(tmpdir=None, keep_so=False): + """Clean up the temporary directory by removing all files in it + called `_cffi_*.{c,so}` as well as the `build` subdirectory.""" + tmpdir = tmpdir or _caller_dir_pycache() + try: + filelist = os.listdir(tmpdir) + except OSError: + return + if keep_so: + suffix = '.c' # only remove .c files + else: + suffix = _get_so_suffixes()[0].lower() + for fn in filelist: + if fn.lower().startswith('_cffi_') and ( + fn.lower().endswith(suffix) or fn.lower().endswith('.c')): + try: + os.unlink(os.path.join(tmpdir, fn)) + except OSError: + pass + clean_dir = [os.path.join(tmpdir, 'build')] + for dir in clean_dir: + try: + for fn in os.listdir(dir): + fn = os.path.join(dir, fn) + if os.path.isdir(fn): + clean_dir.append(fn) + else: + os.unlink(fn) + except OSError: + pass + +def _get_so_suffixes(): + suffixes = _extension_suffixes() + if not suffixes: + # bah, no C_EXTENSION available. 
Occurs on pypy without cpyext + if sys.platform == 'win32': + suffixes = [".pyd"] + else: + suffixes = [".so"] + + return suffixes + +def _ensure_dir(filename): + dirname = os.path.dirname(filename) + if dirname and not os.path.isdir(dirname): + os.makedirs(dirname) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/INSTALLER b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/LICENSE b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/LICENSE new file mode 100644 index 00000000..ad82355b --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2019 TAHRI Ahmed R. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/METADATA b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/METADATA new file mode 100644 index 00000000..822550e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/METADATA @@ -0,0 +1,683 @@ +Metadata-Version: 2.1 +Name: charset-normalizer +Version: 3.3.2 +Summary: The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet. 
+Home-page: https://github.com/Ousret/charset_normalizer +Author: Ahmed TAHRI +Author-email: ahmed.tahri@cloudnursery.dev +License: MIT +Project-URL: Bug Reports, https://github.com/Ousret/charset_normalizer/issues +Project-URL: Documentation, https://charset-normalizer.readthedocs.io/en/latest +Keywords: encoding,charset,charset-detector,detector,normalization,unicode,chardet,detect +Classifier: Development Status :: 5 - Production/Stable +Classifier: License :: OSI Approved :: MIT License +Classifier: Intended Audience :: Developers +Classifier: Topic :: Software Development :: Libraries :: Python Modules +Classifier: Operating System :: OS Independent +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Text Processing :: Linguistic +Classifier: Topic :: Utilities +Classifier: Typing :: Typed +Requires-Python: >=3.7.0 +Description-Content-Type: text/markdown +License-File: LICENSE +Provides-Extra: unicode_backport + +

+# Charset Detection, for Everyone 👋
+
+*The Real First Universal Charset Detector*
+
+> A library that helps you read text from an unknown charset encoding. Motivated by `chardet`,
+> I'm trying to resolve the issue by taking a new approach.
+> All IANA character set names for which the Python core library provides codecs are supported.
+
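+A minimal sketch of that promise, for orientation; the cp1251-encoded sample string is an illustrative assumption, and the exact guess naturally depends on the payload:
+
+```python
+from charset_normalizer import from_bytes
+
+# Pretend we received these bytes without knowing their encoding.
+payload = "Bсеки човек има право на образование!".encode("cp1251")
+
+best_guess = from_bytes(payload).best()  # best CharsetMatch, or None
+if best_guess is not None:
+    print(best_guess.encoding)  # detected code page (a Cyrillic one here)
+    print(str(best_guess))      # decoded, readable text
+```
+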

+>>>>> 👉 Try Me Online Now, Then Adopt Me 👈 <<<<<

+
+This project offers you an alternative to **Universal Charset Encoding Detector**, also known as **Chardet**.
+
+| Feature                                          | [Chardet](https://github.com/chardet/chardet) | Charset Normalizer                                                                                  | [cChardet](https://github.com/PyYoshi/cChardet) |
+|--------------------------------------------------|:---------------------------------------------:|:---------------------------------------------------------------------------------------------------:|:-----------------------------------------------:|
+| `Fast`                                           | ❌                                            | ✅                                                                                                  | ✅                                              |
+| `Universal**`                                    | ❌                                            | ✅                                                                                                  | ❌                                              |
+| `Reliable` **without** distinguishable standards | ❌                                            | ✅                                                                                                  | ✅                                              |
+| `Reliable` **with** distinguishable standards    | ✅                                            | ✅                                                                                                  | ✅                                              |
+| `License`                                        | LGPL-2.1<br>_restrictive_                     | MIT                                                                                                 | MPL-1.1<br>_restrictive_                        |
+| `Native Python`                                  | ✅                                            | ✅                                                                                                  | ❌                                              |
+| `Detect spoken language`                         | ❌                                            | ✅                                                                                                  | N/A                                             |
+| `UnicodeDecodeError Safety`                      | ❌                                            | ✅                                                                                                  | ❌                                              |
+| `Whl Size (min)`                                 | 193.6 kB                                      | 42 kB                                                                                               | ~200 kB                                         |
+| `Supported Encoding`                             | 33                                            | 🎉 [99](https://charset-normalizer.readthedocs.io/en/latest/user/support.html#supported-encodings)  | 40                                              |
+
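+To make the `UnicodeDecodeError Safety` row above concrete, a small sketch; the byte payload is an arbitrary assumption chosen only because it cannot be valid UTF-8:
+
+```python
+from charset_normalizer import from_bytes
+
+raw = bytes(range(0x9C, 0xFF))  # starts with a UTF-8 continuation byte
+
+try:
+    raw.decode("utf_8")  # a blind decode simply blows up
+except UnicodeDecodeError as exc:
+    print("plain .decode() raised:", exc)
+
+# from_bytes() never raises on undecodable input; it returns a
+# CharsetMatches container (possibly empty) that you inspect instead.
+print(from_bytes(raw).best())
+```
+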


+
+*\*\* : They clearly use code specific to particular encodings, even if that covers most of the encodings in common use.*
+Did you get there because of the logs? See [https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html](https://charset-normalizer.readthedocs.io/en/latest/user/miscellaneous.html)
+
+## ⚡ Performance
+
+This package offers better performance than its counterpart Chardet. Here are some numbers.
+
+| Package                                       | Accuracy | Mean per file (ms) | File per sec (est) |
+|-----------------------------------------------|:--------:|:------------------:|:------------------:|
+| [chardet](https://github.com/chardet/chardet) |   86 %   |       200 ms       |     5 file/sec     |
+| charset-normalizer                            | **98 %** |     **10 ms**      |    100 file/sec    |
+
+| Package                                       | 99th percentile | 95th percentile | 50th percentile |
+|-----------------------------------------------|:---------------:|:---------------:|:---------------:|
+| [chardet](https://github.com/chardet/chardet) |     1200 ms     |     287 ms      |      23 ms      |
+| charset-normalizer                            |     100 ms      |      50 ms      |      5 ms       |
+
+Chardet's performance on larger files (1MB+) is very poor. Expect a huge difference on large payloads.
+
+> Stats are generated using 400+ files using default parameters. For more details on the files used, see the GHA workflows.
+> And yes, these results might change at any time. The dataset can be updated to include more files.
+> The actual delays heavily depend on your CPU capabilities. The factors should remain the same.
+> Keep in mind that the stats are generous and that Chardet's accuracy vs. ours is measured using Chardet's initial capability
+> (e.g. supported encodings). Challenge them if you want.
+
+## ✨ Installation
+
+Using pip:
+
+```sh
+pip install charset-normalizer -U
+```
+
+## 🚀 Basic Usage
+
+### CLI
+This package comes with a CLI.
+
+```
+usage: normalizer [-h] [-v] [-a] [-n] [-m] [-r] [-f] [-t THRESHOLD]
+                  file [file ...]
+
+The Real First Universal Charset Detector. Discover originating encoding used
+on text file. Normalize text to unicode.
+
+positional arguments:
+  files                 File(s) to be analysed
+
+optional arguments:
+  -h, --help            show this help message and exit
+  -v, --verbose         Display complementary information about file if any.
+                        Stdout will contain logs about the detection process.
+  -a, --with-alternative
+                        Output complementary possibilities if any. Top-level
+                        JSON WILL be a list.
+  -n, --normalize       Permit to normalize input file. If not set, program
+                        does not write anything.
+  -m, --minimal         Only output the charset detected to STDOUT. Disabling
+                        JSON output.
+  -r, --replace         Replace file when trying to normalize it instead of
+                        creating a new one.
+  -f, --force           Replace file without asking if you are sure, use this
+                        flag with caution.
+  -t THRESHOLD, --threshold THRESHOLD
+                        Define a custom maximum amount of chaos allowed in
+                        decoded content. 0. <= chaos <= 1.
+  --version             Show version information and exit.
+```
+
+```bash
+normalizer ./data/sample.1.fr.srt
+```
+
+or
+
+```bash
+python -m charset_normalizer ./data/sample.1.fr.srt
+```
+
+🎉 Since version 1.4.0 the CLI produces easily usable stdout results in JSON format.
+
+```json
+{
+    "path": "/home/default/projects/charset_normalizer/data/sample.1.fr.srt",
+    "encoding": "cp1252",
+    "encoding_aliases": [
+        "1252",
+        "windows_1252"
+    ],
+    "alternative_encodings": [
+        "cp1254",
+        "cp1256",
+        "cp1258",
+        "iso8859_14",
+        "iso8859_15",
+        "iso8859_16",
+        "iso8859_3",
+        "iso8859_9",
+        "latin_1",
+        "mbcs"
+    ],
+    "language": "French",
+    "alphabets": [
+        "Basic Latin",
+        "Latin-1 Supplement"
+    ],
+    "has_sig_or_bom": false,
+    "chaos": 0.149,
+    "coherence": 97.152,
+    "unicode_path": null,
+    "is_preferred": true
+}
+```
+
+### Python
+*Just print out normalized text*
+```python
+from charset_normalizer import from_path
+
+results = from_path('./my_subtitle.srt')
+
+print(str(results.best()))
+```
+
+*Upgrade your code without effort*
+```python
+from charset_normalizer import detect
+```
+
+The above code will behave the same as **chardet**. We ensure that we offer the best (reasonable) BC result possible.
+
+See the docs for advanced usage: [readthedocs.io](https://charset-normalizer.readthedocs.io/en/latest/)
+
+## 😇 Why
+
+When I started using Chardet, I noticed that it did not meet my expectations, and I wanted to propose a
+reliable alternative using a completely different method. Also! I never back down on a good challenge!
+
+I **don't care** about the **originating charset** encoding, because **two different tables** can
+produce **two identical rendered strings.**
+What I want is to get readable text, the best I can.
+
+In a way, **I'm brute-forcing text decoding.** How cool is that? 😎
+
+Don't confuse the package **ftfy** with charset-normalizer or chardet. ftfy's goal is to repair Unicode strings, whereas charset-normalizer's is to convert a raw file in an unknown encoding to Unicode.
+
+## 🍰 How
+
+  - Discard all charset encoding tables that could not fit the binary content.
+  - Measure the noise, or mess, once opened (by chunks) with a corresponding charset encoding.
+  - Extract the matches with the lowest mess detected.
+  - Additionally, we measure coherence / probe for a language.
+
+**Wait a minute**, what are noise/mess and coherence according to **YOU**?
+
+*Noise:* I opened hundreds of text files, **written by humans**, with the wrong encoding table. **I observed**, then
+**I established** some ground rules about **what is obvious** when **it seems like** a mess.
+I know that my interpretation of what noise is is probably incomplete; feel free to contribute in order to
+improve or rewrite it.
+
+*Coherence:* For each language on earth, we have computed ranked letter-appearance occurrences (the best we can). So I thought
+that intel is worth something here. I use those records against the decoded text to check if I can detect intelligent design.
+
+## ⚡ Known limitations
+
+  - Language detection is unreliable when the text contains two or more languages sharing identical letters (e.g. HTML (English tags) + Turkish content (sharing Latin characters)).
+  - Every charset detector heavily depends on sufficient content. In common cases, do not bother running detection on very tiny content.
+
+## ⚠️ About Python EOLs
+
+**If you are running:**
+
+- Python >=2.7,<3.5: Unsupported
+- Python 3.5: charset-normalizer < 2.1
+- Python 3.6: charset-normalizer < 3.1
+- Python 3.7: charset-normalizer < 4.0
+
+Upgrade your Python interpreter as soon as possible.
+
+## 👤 Contributing
+
+Contributions, issues and feature requests are very much welcome.
+Feel free to check the [issues page](https://github.com/ousret/charset_normalizer/issues) if you want to contribute.
+
+## 📝 License
+
+Copyright © [Ahmed TAHRI @Ousret](https://github.com/Ousret).
+This project is [MIT](https://github.com/Ousret/charset_normalizer/blob/master/LICENSE) licensed. + +Characters frequencies used in this project © 2012 [Denny Vrandečić](http://simia.net/letters/) + +## 💼 For Enterprise + +Professional support for charset-normalizer is available as part of the [Tidelift +Subscription][1]. Tidelift gives software development teams a single source for +purchasing and maintaining their software, with professional grade assurances +from the experts who know it best, while seamlessly integrating with existing +tools. + +[1]: https://tidelift.com/subscription/pkg/pypi-charset-normalizer?utm_source=pypi-charset-normalizer&utm_medium=readme + +# Changelog +All notable changes to charset-normalizer will be documented in this file. This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). + +## [3.3.2](https://github.com/Ousret/charset_normalizer/compare/3.3.1...3.3.2) (2023-10-31) + +### Fixed +- Unintentional memory usage regression when using large payload that match several encoding (#376) +- Regression on some detection case showcased in the documentation (#371) + +### Added +- Noise (md) probe that identify malformed arabic representation due to the presence of letters in isolated form (credit to my wife) + +## [3.3.1](https://github.com/Ousret/charset_normalizer/compare/3.3.0...3.3.1) (2023-10-22) + +### Changed +- Optional mypyc compilation upgraded to version 1.6.1 for Python >= 3.8 +- Improved the general detection reliability based on reports from the community + +## [3.3.0](https://github.com/Ousret/charset_normalizer/compare/3.2.0...3.3.0) (2023-09-30) + +### Added +- Allow to execute the CLI (e.g. normalizer) through `python -m charset_normalizer.cli` or `python -m charset_normalizer` +- Support for 9 forgotten encoding that are supported by Python but unlisted in `encoding.aliases` as they have no alias (#323) + +### Removed +- (internal) Redundant utils.is_ascii function and unused function is_private_use_only +- (internal) charset_normalizer.assets is moved inside charset_normalizer.constant + +### Changed +- (internal) Unicode code blocks in constants are updated using the latest v15.0.0 definition to improve detection +- Optional mypyc compilation upgraded to version 1.5.1 for Python >= 3.8 + +### Fixed +- Unable to properly sort CharsetMatch when both chaos/noise and coherence were close due to an unreachable condition in \_\_lt\_\_ (#350) + +## [3.2.0](https://github.com/Ousret/charset_normalizer/compare/3.1.0...3.2.0) (2023-06-07) + +### Changed +- Typehint for function `from_path` no longer enforce `PathLike` as its first argument +- Minor improvement over the global detection reliability + +### Added +- Introduce function `is_binary` that relies on main capabilities, and optimized to detect binaries +- Propagate `enable_fallback` argument throughout `from_bytes`, `from_path`, and `from_fp` that allow a deeper control over the detection (default True) +- Explicit support for Python 3.12 + +### Fixed +- Edge case detection failure where a file would contain 'very-long' camel cased word (Issue #289) + +## [3.1.0](https://github.com/Ousret/charset_normalizer/compare/3.0.1...3.1.0) (2023-03-06) + +### Added +- Argument `should_rename_legacy` for legacy function `detect` and disregard any new arguments without errors (PR #262) + +### Removed +- Support for Python 3.6 (PR #260) + +### Changed +- Optional speedup provided by mypy/c 1.0.1 + +## 
[3.0.1](https://github.com/Ousret/charset_normalizer/compare/3.0.0...3.0.1) (2022-11-18) + +### Fixed +- Multi-bytes cutter/chunk generator did not always cut correctly (PR #233) + +### Changed +- Speedup provided by mypy/c 0.990 on Python >= 3.7 + +## [3.0.0](https://github.com/Ousret/charset_normalizer/compare/2.1.1...3.0.0) (2022-10-20) + +### Added +- Extend the capability of explain=True when cp_isolation contains at most two entries (min one), will log in details of the Mess-detector results +- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES +- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio +- `normalizer --version` now specify if current version provide extra speedup (meaning mypyc compilation whl) + +### Changed +- Build with static metadata using 'build' frontend +- Make the language detection stricter +- Optional: Module `md.py` can be compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1 + +### Fixed +- CLI with opt --normalize fail when using full path for files +- TooManyAccentuatedPlugin induce false positive on the mess detection when too few alpha character have been fed to it +- Sphinx warnings when generating the documentation + +### Removed +- Coherence detector no longer return 'Simple English' instead return 'English' +- Coherence detector no longer return 'Classical Chinese' instead return 'Chinese' +- Breaking: Method `first()` and `best()` from CharsetMatch +- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflict with ASCII) +- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches +- Breaking: Top-level function `normalize` +- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch +- Support for the backport `unicodedata2` + +## [3.0.0rc1](https://github.com/Ousret/charset_normalizer/compare/3.0.0b2...3.0.0rc1) (2022-10-18) + +### Added +- Extend the capability of explain=True when cp_isolation contains at most two entries (min one), will log in details of the Mess-detector results +- Support for alternative language frequency set in charset_normalizer.assets.FREQUENCIES +- Add parameter `language_threshold` in `from_bytes`, `from_path` and `from_fp` to adjust the minimum expected coherence ratio + +### Changed +- Build with static metadata using 'build' frontend +- Make the language detection stricter + +### Fixed +- CLI with opt --normalize fail when using full path for files +- TooManyAccentuatedPlugin induce false positive on the mess detection when too few alpha character have been fed to it + +### Removed +- Coherence detector no longer return 'Simple English' instead return 'English' +- Coherence detector no longer return 'Classical Chinese' instead return 'Chinese' + +## [3.0.0b2](https://github.com/Ousret/charset_normalizer/compare/3.0.0b1...3.0.0b2) (2022-08-21) + +### Added +- `normalizer --version` now specify if current version provide extra speedup (meaning mypyc compilation whl) + +### Removed +- Breaking: Method `first()` and `best()` from CharsetMatch +- UTF-7 will no longer appear as "detected" without a recognized SIG/mark (is unreliable/conflict with ASCII) + +### Fixed +- Sphinx warnings when generating the documentation + +## [3.0.0b1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...3.0.0b1) (2022-08-15) + +### Changed +- Optional: Module `md.py` can be 
compiled using Mypyc to provide an extra speedup up to 4x faster than v2.1 + +### Removed +- Breaking: Class aliases CharsetDetector, CharsetDoctor, CharsetNormalizerMatch and CharsetNormalizerMatches +- Breaking: Top-level function `normalize` +- Breaking: Properties `chaos_secondary_pass`, `coherence_non_latin` and `w_counter` from CharsetMatch +- Support for the backport `unicodedata2` + +## [2.1.1](https://github.com/Ousret/charset_normalizer/compare/2.1.0...2.1.1) (2022-08-19) + +### Deprecated +- Function `normalize` scheduled for removal in 3.0 + +### Changed +- Removed useless call to decode in fn is_unprintable (#206) + +### Fixed +- Third-party library (i18n xgettext) crashing not recognizing utf_8 (PEP 263) with underscore from [@aleksandernovikov](https://github.com/aleksandernovikov) (#204) + +## [2.1.0](https://github.com/Ousret/charset_normalizer/compare/2.0.12...2.1.0) (2022-06-19) + +### Added +- Output the Unicode table version when running the CLI with `--version` (PR #194) + +### Changed +- Re-use decoded buffer for single byte character sets from [@nijel](https://github.com/nijel) (PR #175) +- Fixing some performance bottlenecks from [@deedy5](https://github.com/deedy5) (PR #183) + +### Fixed +- Workaround potential bug in cpython with Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space (PR #175) +- CLI default threshold aligned with the API threshold from [@oleksandr-kuzmenko](https://github.com/oleksandr-kuzmenko) (PR #181) + +### Removed +- Support for Python 3.5 (PR #192) + +### Deprecated +- Use of backport unicodedata from `unicodedata2` as Python is quickly catching up, scheduled for removal in 3.0 (PR #194) + +## [2.0.12](https://github.com/Ousret/charset_normalizer/compare/2.0.11...2.0.12) (2022-02-12) + +### Fixed +- ASCII miss-detection on rare cases (PR #170) + +## [2.0.11](https://github.com/Ousret/charset_normalizer/compare/2.0.10...2.0.11) (2022-01-30) + +### Added +- Explicit support for Python 3.11 (PR #164) + +### Changed +- The logging behavior have been completely reviewed, now using only TRACE and DEBUG levels (PR #163 #165) + +## [2.0.10](https://github.com/Ousret/charset_normalizer/compare/2.0.9...2.0.10) (2022-01-04) + +### Fixed +- Fallback match entries might lead to UnicodeDecodeError for large bytes sequence (PR #154) + +### Changed +- Skipping the language-detection (CD) on ASCII (PR #155) + +## [2.0.9](https://github.com/Ousret/charset_normalizer/compare/2.0.8...2.0.9) (2021-12-03) + +### Changed +- Moderating the logging impact (since 2.0.8) for specific environments (PR #147) + +### Fixed +- Wrong logging level applied when setting kwarg `explain` to True (PR #146) + +## [2.0.8](https://github.com/Ousret/charset_normalizer/compare/2.0.7...2.0.8) (2021-11-24) +### Changed +- Improvement over Vietnamese detection (PR #126) +- MD improvement on trailing data and long foreign (non-pure latin) data (PR #124) +- Efficiency improvements in cd/alphabet_languages from [@adbar](https://github.com/adbar) (PR #122) +- call sum() without an intermediary list following PEP 289 recommendations from [@adbar](https://github.com/adbar) (PR #129) +- Code style as refactored by Sourcery-AI (PR #131) +- Minor adjustment on the MD around european words (PR #133) +- Remove and replace SRTs from assets / tests (PR #139) +- Initialize the library logger with a `NullHandler` by default from [@nmaynes](https://github.com/nmaynes) (PR #135) +- Setting kwarg `explain` to True will add provisionally (bounded to 
function lifespan) a specific stream handler (PR #135) + +### Fixed +- Fix large (misleading) sequence giving UnicodeDecodeError (PR #137) +- Avoid using too insignificant chunk (PR #137) + +### Added +- Add and expose function `set_logging_handler` to configure a specific StreamHandler from [@nmaynes](https://github.com/nmaynes) (PR #135) +- Add `CHANGELOG.md` entries, format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) (PR #141) + +## [2.0.7](https://github.com/Ousret/charset_normalizer/compare/2.0.6...2.0.7) (2021-10-11) +### Added +- Add support for Kazakh (Cyrillic) language detection (PR #109) + +### Changed +- Further, improve inferring the language from a given single-byte code page (PR #112) +- Vainly trying to leverage PEP263 when PEP3120 is not supported (PR #116) +- Refactoring for potential performance improvements in loops from [@adbar](https://github.com/adbar) (PR #113) +- Various detection improvement (MD+CD) (PR #117) + +### Removed +- Remove redundant logging entry about detected language(s) (PR #115) + +### Fixed +- Fix a minor inconsistency between Python 3.5 and other versions regarding language detection (PR #117 #102) + +## [2.0.6](https://github.com/Ousret/charset_normalizer/compare/2.0.5...2.0.6) (2021-09-18) +### Fixed +- Unforeseen regression with the loss of the backward-compatibility with some older minor of Python 3.5.x (PR #100) +- Fix CLI crash when using --minimal output in certain cases (PR #103) + +### Changed +- Minor improvement to the detection efficiency (less than 1%) (PR #106 #101) + +## [2.0.5](https://github.com/Ousret/charset_normalizer/compare/2.0.4...2.0.5) (2021-09-14) +### Changed +- The project now comply with: flake8, mypy, isort and black to ensure a better overall quality (PR #81) +- The BC-support with v1.x was improved, the old staticmethods are restored (PR #82) +- The Unicode detection is slightly improved (PR #93) +- Add syntax sugar \_\_bool\_\_ for results CharsetMatches list-container (PR #91) + +### Removed +- The project no longer raise warning on tiny content given for detection, will be simply logged as warning instead (PR #92) + +### Fixed +- In some rare case, the chunks extractor could cut in the middle of a multi-byte character and could mislead the mess detection (PR #95) +- Some rare 'space' characters could trip up the UnprintablePlugin/Mess detection (PR #96) +- The MANIFEST.in was not exhaustive (PR #78) + +## [2.0.4](https://github.com/Ousret/charset_normalizer/compare/2.0.3...2.0.4) (2021-07-30) +### Fixed +- The CLI no longer raise an unexpected exception when no encoding has been found (PR #70) +- Fix accessing the 'alphabets' property when the payload contains surrogate characters (PR #68) +- The logger could mislead (explain=True) on detected languages and the impact of one MBCS match (PR #72) +- Submatch factoring could be wrong in rare edge cases (PR #72) +- Multiple files given to the CLI were ignored when publishing results to STDOUT. (After the first path) (PR #72) +- Fix line endings from CRLF to LF for certain project files (PR #67) + +### Changed +- Adjust the MD to lower the sensitivity, thus improving the global detection reliability (PR #69 #76) +- Allow fallback on specified encoding if any (PR #71) + +## [2.0.3](https://github.com/Ousret/charset_normalizer/compare/2.0.2...2.0.3) (2021-07-16) +### Changed +- Part of the detection mechanism has been improved to be less sensitive, resulting in more accurate detection results. Especially ASCII. 
(PR #63) +- According to the community wishes, the detection will fall back on ASCII or UTF-8 in a last-resort case. (PR #64) + +## [2.0.2](https://github.com/Ousret/charset_normalizer/compare/2.0.1...2.0.2) (2021-07-15) +### Fixed +- Empty/Too small JSON payload miss-detection fixed. Report from [@tseaver](https://github.com/tseaver) (PR #59) + +### Changed +- Don't inject unicodedata2 into sys.modules from [@akx](https://github.com/akx) (PR #57) + +## [2.0.1](https://github.com/Ousret/charset_normalizer/compare/2.0.0...2.0.1) (2021-07-13) +### Fixed +- Make it work where there isn't a filesystem available, dropping assets frequencies.json. Report from [@sethmlarson](https://github.com/sethmlarson). (PR #55) +- Using explain=False permanently disable the verbose output in the current runtime (PR #47) +- One log entry (language target preemptive) was not show in logs when using explain=True (PR #47) +- Fix undesired exception (ValueError) on getitem of instance CharsetMatches (PR #52) + +### Changed +- Public function normalize default args values were not aligned with from_bytes (PR #53) + +### Added +- You may now use charset aliases in cp_isolation and cp_exclusion arguments (PR #47) + +## [2.0.0](https://github.com/Ousret/charset_normalizer/compare/1.4.1...2.0.0) (2021-07-02) +### Changed +- 4x to 5 times faster than the previous 1.4.0 release. At least 2x faster than Chardet. +- Accent has been made on UTF-8 detection, should perform rather instantaneous. +- The backward compatibility with Chardet has been greatly improved. The legacy detect function returns an identical charset name whenever possible. +- The detection mechanism has been slightly improved, now Turkish content is detected correctly (most of the time) +- The program has been rewritten to ease the readability and maintainability. (+Using static typing)+ +- utf_7 detection has been reinstated. + +### Removed +- This package no longer require anything when used with Python 3.5 (Dropped cached_property) +- Removed support for these languages: Catalan, Esperanto, Kazakh, Baque, Volapük, Azeri, Galician, Nynorsk, Macedonian, and Serbocroatian. +- The exception hook on UnicodeDecodeError has been removed. + +### Deprecated +- Methods coherence_non_latin, w_counter, chaos_secondary_pass of the class CharsetMatch are now deprecated and scheduled for removal in v3.0 + +### Fixed +- The CLI output used the relative path of the file(s). Should be absolute. + +## [1.4.1](https://github.com/Ousret/charset_normalizer/compare/1.4.0...1.4.1) (2021-05-28) +### Fixed +- Logger configuration/usage no longer conflict with others (PR #44) + +## [1.4.0](https://github.com/Ousret/charset_normalizer/compare/1.3.9...1.4.0) (2021-05-21) +### Removed +- Using standard logging instead of using the package loguru. +- Dropping nose test framework in favor of the maintained pytest. +- Choose to not use dragonmapper package to help with gibberish Chinese/CJK text. +- Require cached_property only for Python 3.5 due to constraint. Dropping for every other interpreter version. +- Stop support for UTF-7 that does not contain a SIG. +- Dropping PrettyTable, replaced with pure JSON output in CLI. + +### Fixed +- BOM marker in a CharsetNormalizerMatch instance could be False in rare cases even if obviously present. Due to the sub-match factoring process. +- Not searching properly for the BOM when trying utf32/16 parent codec. + +### Changed +- Improving the package final size by compressing frequencies.json. +- Huge improvement over the larges payload. 
+ +### Added +- CLI now produces JSON consumable output. +- Return ASCII if given sequences fit. Given reasonable confidence. + +## [1.3.9](https://github.com/Ousret/charset_normalizer/compare/1.3.8...1.3.9) (2021-05-13) + +### Fixed +- In some very rare cases, you may end up getting encode/decode errors due to a bad bytes payload (PR #40) + +## [1.3.8](https://github.com/Ousret/charset_normalizer/compare/1.3.7...1.3.8) (2021-05-12) + +### Fixed +- Empty given payload for detection may cause an exception if trying to access the `alphabets` property. (PR #39) + +## [1.3.7](https://github.com/Ousret/charset_normalizer/compare/1.3.6...1.3.7) (2021-05-12) + +### Fixed +- The legacy detect function should return UTF-8-SIG if sig is present in the payload. (PR #38) + +## [1.3.6](https://github.com/Ousret/charset_normalizer/compare/1.3.5...1.3.6) (2021-02-09) + +### Changed +- Amend the previous release to allow prettytable 2.0 (PR #35) + +## [1.3.5](https://github.com/Ousret/charset_normalizer/compare/1.3.4...1.3.5) (2021-02-08) + +### Fixed +- Fix error while using the package with a python pre-release interpreter (PR #33) + +### Changed +- Dependencies refactoring, constraints revised. + +### Added +- Add python 3.9 and 3.10 to the supported interpreters + +MIT License + +Copyright (c) 2019 TAHRI Ahmed R. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/RECORD b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/RECORD new file mode 100644 index 00000000..ca6ba080 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/RECORD @@ -0,0 +1,36 @@ +../../bin/normalizer.exe,sha256=iKDTwkacNS1hbqXa0xvRQFUnNf0vcz8NrP_sLBiyMQY,108469 +charset_normalizer-3.3.2.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +charset_normalizer-3.3.2.dist-info/LICENSE,sha256=znnj1Var_lZ-hzOvD5W50wcQDp9qls3SD2xIau88ufc,1090 +charset_normalizer-3.3.2.dist-info/METADATA,sha256=hHDqDpXmQH3f8XSn30NlqB3R3NuhJzXC0zABqFwA6Nk,34233 +charset_normalizer-3.3.2.dist-info/RECORD,, +charset_normalizer-3.3.2.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer-3.3.2.dist-info/WHEEL,sha256=badvNS-y9fEq0X-qzdZYvql_JFjI7Xfw-wR8FsjoK0I,102 +charset_normalizer-3.3.2.dist-info/entry_points.txt,sha256=ADSTKrkXZ3hhdOVFi6DcUEHQRS0xfxDIE_pEz4wLIXA,65 +charset_normalizer-3.3.2.dist-info/top_level.txt,sha256=7ASyzePr8_xuZWJsnqJjIBtyV8vhEo0wBCv1MPRRi3Q,19 +charset_normalizer/__init__.py,sha256=m1cUEsb9K5v831m9P_lv2JlUEKD7MhxL7fxw3hn75o4,1623 +charset_normalizer/__main__.py,sha256=nVnMo31hTPN2Yy045GJIvHj3dKDJz4dAQR3cUSdvYyc,77 +charset_normalizer/__pycache__/__init__.cpython-311.pyc,, +charset_normalizer/__pycache__/__main__.cpython-311.pyc,, +charset_normalizer/__pycache__/api.cpython-311.pyc,, +charset_normalizer/__pycache__/cd.cpython-311.pyc,, +charset_normalizer/__pycache__/constant.cpython-311.pyc,, +charset_normalizer/__pycache__/legacy.cpython-311.pyc,, +charset_normalizer/__pycache__/md.cpython-311.pyc,, +charset_normalizer/__pycache__/models.cpython-311.pyc,, +charset_normalizer/__pycache__/utils.cpython-311.pyc,, +charset_normalizer/__pycache__/version.cpython-311.pyc,, +charset_normalizer/api.py,sha256=qFL0frUrcfcYEJmGpqoJ4Af68ToVue3f5SK1gp8UC5Q,21723 +charset_normalizer/cd.py,sha256=Yfk3sbee0Xqo1-vmQYbOqM51-SajXPLzFVG89nTsZzc,12955 +charset_normalizer/cli/__init__.py,sha256=COwP8fK2qbuldMem2lL81JieY-PIA2G2GZ5IdAPMPFA,106 +charset_normalizer/cli/__main__.py,sha256=rs-cBipBzr7d0TAaUa0nG4qrjXhdddeCVB-f6Xt_wS0,10040 +charset_normalizer/cli/__pycache__/__init__.cpython-311.pyc,, +charset_normalizer/cli/__pycache__/__main__.cpython-311.pyc,, +charset_normalizer/constant.py,sha256=2tVrXQ9cvC8jt0b8gZzRXvXte1pVbRra0A5dOWDQSao,42476 +charset_normalizer/legacy.py,sha256=KbJxEpu7g6zE2uXSB3T-3178cgiSQdVJlJmY-gv3EAM,2125 +charset_normalizer/md.cp311-win_amd64.pyd,sha256=eQoRqicFI8LvpgIc5PmUw8Wmfo6qrwIHTVMIQgtovZQ,10752 +charset_normalizer/md.py,sha256=F7S001NdPgkAoma2w598Idx2clW9ljXlRIYKZQKsCQA,20239 +charset_normalizer/md__mypyc.cp311-win_amd64.pyd,sha256=PZiTqnnv0T2B_NYU6e9ftqrZBWm-7e1REt5e1aw891M,119296 +charset_normalizer/models.py,sha256=AlehuyGDE74jhryjg6TTkYh1MCntfxXFfGhTi0esu-Y,11964 +charset_normalizer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +charset_normalizer/utils.py,sha256=jjvfSXHJD6QPgxcxIx4utsOFx3PxFssWef1IYxA3uKs,12315 +charset_normalizer/version.py,sha256=q3fF12xGlBuaub5kroTZt7lBPQLO3kFvMnkoEnt-6YA,85 diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/REQUESTED b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/REQUESTED new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/WHEEL 
b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/WHEEL new file mode 100644 index 00000000..6d160455 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.41.2) +Root-Is-Purelib: false +Tag: cp311-cp311-win_amd64 + diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/entry_points.txt b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/entry_points.txt new file mode 100644 index 00000000..65619e73 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/entry_points.txt @@ -0,0 +1,2 @@ +[console_scripts] +normalizer = charset_normalizer.cli:cli_detect diff --git a/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/top_level.txt b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/top_level.txt new file mode 100644 index 00000000..66958f0a --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer-3.3.2.dist-info/top_level.txt @@ -0,0 +1 @@ +charset_normalizer diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/__init__.py b/templates/skills/file_manager/dependencies/charset_normalizer/__init__.py new file mode 100644 index 00000000..55991fc3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/__init__.py @@ -0,0 +1,46 @@ +# -*- coding: utf-8 -*- +""" +Charset-Normalizer +~~~~~~~~~~~~~~ +The Real First Universal Charset Detector. +A library that helps you read text from an unknown charset encoding. +Motivated by chardet, This package is trying to resolve the issue by taking a new approach. +All IANA character set names for which the Python core library provides codecs are supported. + +Basic usage: + >>> from charset_normalizer import from_bytes + >>> results = from_bytes('Bсеки човек има право на образование. Oбразованието!'.encode('utf_8')) + >>> best_guess = results.best() + >>> str(best_guess) + 'Bсеки човек има право на образование. Oбразованието!' + +Others methods and usages are available - see the full documentation +at . +:copyright: (c) 2021 by Ahmed TAHRI +:license: MIT, see LICENSE for more details. 
+""" +import logging + +from .api import from_bytes, from_fp, from_path, is_binary +from .legacy import detect +from .models import CharsetMatch, CharsetMatches +from .utils import set_logging_handler +from .version import VERSION, __version__ + +__all__ = ( + "from_fp", + "from_path", + "from_bytes", + "is_binary", + "detect", + "CharsetMatch", + "CharsetMatches", + "__version__", + "VERSION", + "set_logging_handler", +) + +# Attach a NullHandler to the top level logger by default +# https://docs.python.org/3.3/howto/logging.html#configuring-logging-for-a-library + +logging.getLogger("charset_normalizer").addHandler(logging.NullHandler()) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/__main__.py b/templates/skills/file_manager/dependencies/charset_normalizer/__main__.py new file mode 100644 index 00000000..beae2ef7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/__main__.py @@ -0,0 +1,4 @@ +from .cli import cli_detect + +if __name__ == "__main__": + cli_detect() diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/api.py b/templates/skills/file_manager/dependencies/charset_normalizer/api.py new file mode 100644 index 00000000..0ba08e3a --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/api.py @@ -0,0 +1,626 @@ +import logging +from os import PathLike +from typing import BinaryIO, List, Optional, Set, Union + +from .cd import ( + coherence_ratio, + encoding_languages, + mb_encoding_languages, + merge_coherence_ratios, +) +from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE +from .md import mess_ratio +from .models import CharsetMatch, CharsetMatches +from .utils import ( + any_specified_encoding, + cut_sequence_chunks, + iana_name, + identify_sig_or_bom, + is_cp_similar, + is_multi_byte_encoding, + should_strip_sig_or_bom, +) + +# Will most likely be controversial +# logging.addLevelName(TRACE, "TRACE") +logger = logging.getLogger("charset_normalizer") +explain_handler = logging.StreamHandler() +explain_handler.setFormatter( + logging.Formatter("%(asctime)s | %(levelname)s | %(message)s") +) + + +def from_bytes( + sequences: Union[bytes, bytearray], + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.2, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, + language_threshold: float = 0.1, + enable_fallback: bool = True, +) -> CharsetMatches: + """ + Given a raw bytes sequence, return the best possibles charset usable to render str objects. + If there is no results, it is a strong indicator that the source is binary/not text. + By default, the process will extract 5 blocks of 512o each to assess the mess and coherence of a given sequence. + And will give up a particular code page after 20% of measured mess. Those criteria are customizable at will. + + The preemptive behavior DOES NOT replace the traditional detection workflow, it prioritize a particular code page + but never take it for granted. Can improve the performance. + + You may want to focus your attention to some code page or/and not others, use cp_isolation and cp_exclusion for that + purpose. + + This function will strip the SIG in the payload/sequence every time except on UTF-16, UTF-32. 
+ By default the library does not setup any handler other than the NullHandler, if you choose to set the 'explain' + toggle to True it will alter the logger configuration to add a StreamHandler that is suitable for debugging. + Custom logging format and handler can be set manually. + """ + + if not isinstance(sequences, (bytearray, bytes)): + raise TypeError( + "Expected object of type bytes or bytearray, got: {0}".format( + type(sequences) + ) + ) + + if explain: + previous_logger_level: int = logger.level + logger.addHandler(explain_handler) + logger.setLevel(TRACE) + + length: int = len(sequences) + + if length == 0: + logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.") + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level or logging.WARNING) + return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")]) + + if cp_isolation is not None: + logger.log( + TRACE, + "cp_isolation is set. use this flag for debugging purpose. " + "limited list of encoding allowed : %s.", + ", ".join(cp_isolation), + ) + cp_isolation = [iana_name(cp, False) for cp in cp_isolation] + else: + cp_isolation = [] + + if cp_exclusion is not None: + logger.log( + TRACE, + "cp_exclusion is set. use this flag for debugging purpose. " + "limited list of encoding excluded : %s.", + ", ".join(cp_exclusion), + ) + cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion] + else: + cp_exclusion = [] + + if length <= (chunk_size * steps): + logger.log( + TRACE, + "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.", + steps, + chunk_size, + length, + ) + steps = 1 + chunk_size = length + + if steps > 1 and length / steps < chunk_size: + chunk_size = int(length / steps) + + is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE + is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE + + if is_too_small_sequence: + logger.log( + TRACE, + "Trying to detect encoding from a tiny portion of ({}) byte(s).".format( + length + ), + ) + elif is_too_large_sequence: + logger.log( + TRACE, + "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format( + length + ), + ) + + prioritized_encodings: List[str] = [] + + specified_encoding: Optional[str] = ( + any_specified_encoding(sequences) if preemptive_behaviour else None + ) + + if specified_encoding is not None: + prioritized_encodings.append(specified_encoding) + logger.log( + TRACE, + "Detected declarative mark in sequence. Priority +1 given for %s.", + specified_encoding, + ) + + tested: Set[str] = set() + tested_but_hard_failure: List[str] = [] + tested_but_soft_failure: List[str] = [] + + fallback_ascii: Optional[CharsetMatch] = None + fallback_u8: Optional[CharsetMatch] = None + fallback_specified: Optional[CharsetMatch] = None + + results: CharsetMatches = CharsetMatches() + + sig_encoding, sig_payload = identify_sig_or_bom(sequences) + + if sig_encoding is not None: + prioritized_encodings.append(sig_encoding) + logger.log( + TRACE, + "Detected a SIG or BOM mark on first %i byte(s). 
Priority +1 given for %s.", + len(sig_payload), + sig_encoding, + ) + + prioritized_encodings.append("ascii") + + if "utf_8" not in prioritized_encodings: + prioritized_encodings.append("utf_8") + + for encoding_iana in prioritized_encodings + IANA_SUPPORTED: + if cp_isolation and encoding_iana not in cp_isolation: + continue + + if cp_exclusion and encoding_iana in cp_exclusion: + continue + + if encoding_iana in tested: + continue + + tested.add(encoding_iana) + + decoded_payload: Optional[str] = None + bom_or_sig_available: bool = sig_encoding == encoding_iana + strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom( + encoding_iana + ) + + if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available: + logger.log( + TRACE, + "Encoding %s won't be tested as-is because it require a BOM. Will try some sub-encoder LE/BE.", + encoding_iana, + ) + continue + if encoding_iana in {"utf_7"} and not bom_or_sig_available: + logger.log( + TRACE, + "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.", + encoding_iana, + ) + continue + + try: + is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana) + except (ModuleNotFoundError, ImportError): + logger.log( + TRACE, + "Encoding %s does not provide an IncrementalDecoder", + encoding_iana, + ) + continue + + try: + if is_too_large_sequence and is_multi_byte_decoder is False: + str( + sequences[: int(50e4)] + if strip_sig_or_bom is False + else sequences[len(sig_payload) : int(50e4)], + encoding=encoding_iana, + ) + else: + decoded_payload = str( + sequences + if strip_sig_or_bom is False + else sequences[len(sig_payload) :], + encoding=encoding_iana, + ) + except (UnicodeDecodeError, LookupError) as e: + if not isinstance(e, LookupError): + logger.log( + TRACE, + "Code page %s does not fit given bytes sequence at ALL. %s", + encoding_iana, + str(e), + ) + tested_but_hard_failure.append(encoding_iana) + continue + + similar_soft_failure_test: bool = False + + for encoding_soft_failed in tested_but_soft_failure: + if is_cp_similar(encoding_iana, encoding_soft_failed): + similar_soft_failure_test = True + break + + if similar_soft_failure_test: + logger.log( + TRACE, + "%s is deemed too similar to code page %s and was consider unsuited already. 
Continuing!", + encoding_iana, + encoding_soft_failed, + ) + continue + + r_ = range( + 0 if not bom_or_sig_available else len(sig_payload), + length, + int(length / steps), + ) + + multi_byte_bonus: bool = ( + is_multi_byte_decoder + and decoded_payload is not None + and len(decoded_payload) < length + ) + + if multi_byte_bonus: + logger.log( + TRACE, + "Code page %s is a multi byte encoding table and it appear that at least one character " + "was encoded using n-bytes.", + encoding_iana, + ) + + max_chunk_gave_up: int = int(len(r_) / 4) + + max_chunk_gave_up = max(max_chunk_gave_up, 2) + early_stop_count: int = 0 + lazy_str_hard_failure = False + + md_chunks: List[str] = [] + md_ratios = [] + + try: + for chunk in cut_sequence_chunks( + sequences, + encoding_iana, + r_, + chunk_size, + bom_or_sig_available, + strip_sig_or_bom, + sig_payload, + is_multi_byte_decoder, + decoded_payload, + ): + md_chunks.append(chunk) + + md_ratios.append( + mess_ratio( + chunk, + threshold, + explain is True and 1 <= len(cp_isolation) <= 2, + ) + ) + + if md_ratios[-1] >= threshold: + early_stop_count += 1 + + if (early_stop_count >= max_chunk_gave_up) or ( + bom_or_sig_available and strip_sig_or_bom is False + ): + break + except ( + UnicodeDecodeError + ) as e: # Lazy str loading may have missed something there + logger.log( + TRACE, + "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s", + encoding_iana, + str(e), + ) + early_stop_count = max_chunk_gave_up + lazy_str_hard_failure = True + + # We might want to check the sequence again with the whole content + # Only if initial MD tests passes + if ( + not lazy_str_hard_failure + and is_too_large_sequence + and not is_multi_byte_decoder + ): + try: + sequences[int(50e3) :].decode(encoding_iana, errors="strict") + except UnicodeDecodeError as e: + logger.log( + TRACE, + "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. %s", + encoding_iana, + str(e), + ) + tested_but_hard_failure.append(encoding_iana) + continue + + mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0 + if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up: + tested_but_soft_failure.append(encoding_iana) + logger.log( + TRACE, + "%s was excluded because of initial chaos probing. Gave up %i time(s). " + "Computed mean chaos is %f %%.", + encoding_iana, + early_stop_count, + round(mean_mess_ratio * 100, ndigits=3), + ) + # Preparing those fallbacks in case we got nothing. + if ( + enable_fallback + and encoding_iana in ["ascii", "utf_8", specified_encoding] + and not lazy_str_hard_failure + ): + fallback_entry = CharsetMatch( + sequences, encoding_iana, threshold, False, [], decoded_payload + ) + if encoding_iana == specified_encoding: + fallback_specified = fallback_entry + elif encoding_iana == "ascii": + fallback_ascii = fallback_entry + else: + fallback_u8 = fallback_entry + continue + + logger.log( + TRACE, + "%s passed initial chaos probing. 
Mean measured chaos is %f %%", + encoding_iana, + round(mean_mess_ratio * 100, ndigits=3), + ) + + if not is_multi_byte_decoder: + target_languages: List[str] = encoding_languages(encoding_iana) + else: + target_languages = mb_encoding_languages(encoding_iana) + + if target_languages: + logger.log( + TRACE, + "{} should target any language(s) of {}".format( + encoding_iana, str(target_languages) + ), + ) + + cd_ratios = [] + + # We shall skip the CD when its about ASCII + # Most of the time its not relevant to run "language-detection" on it. + if encoding_iana != "ascii": + for chunk in md_chunks: + chunk_languages = coherence_ratio( + chunk, + language_threshold, + ",".join(target_languages) if target_languages else None, + ) + + cd_ratios.append(chunk_languages) + + cd_ratios_merged = merge_coherence_ratios(cd_ratios) + + if cd_ratios_merged: + logger.log( + TRACE, + "We detected language {} using {}".format( + cd_ratios_merged, encoding_iana + ), + ) + + results.append( + CharsetMatch( + sequences, + encoding_iana, + mean_mess_ratio, + bom_or_sig_available, + cd_ratios_merged, + decoded_payload, + ) + ) + + if ( + encoding_iana in [specified_encoding, "ascii", "utf_8"] + and mean_mess_ratio < 0.1 + ): + logger.debug( + "Encoding detection: %s is most likely the one.", encoding_iana + ) + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + return CharsetMatches([results[encoding_iana]]) + + if encoding_iana == sig_encoding: + logger.debug( + "Encoding detection: %s is most likely the one as we detected a BOM or SIG within " + "the beginning of the sequence.", + encoding_iana, + ) + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + return CharsetMatches([results[encoding_iana]]) + + if len(results) == 0: + if fallback_u8 or fallback_ascii or fallback_specified: + logger.log( + TRACE, + "Nothing got out of the detection process. Using ASCII/UTF-8/Specified fallback.", + ) + + if fallback_specified: + logger.debug( + "Encoding detection: %s will be used as a fallback match", + fallback_specified.encoding, + ) + results.append(fallback_specified) + elif ( + (fallback_u8 and fallback_ascii is None) + or ( + fallback_u8 + and fallback_ascii + and fallback_u8.fingerprint != fallback_ascii.fingerprint + ) + or (fallback_u8 is not None) + ): + logger.debug("Encoding detection: utf_8 will be used as a fallback match") + results.append(fallback_u8) + elif fallback_ascii: + logger.debug("Encoding detection: ascii will be used as a fallback match") + results.append(fallback_ascii) + + if results: + logger.debug( + "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.", + results.best().encoding, # type: ignore + len(results) - 1, + ) + else: + logger.debug("Encoding detection: Unable to determine any suitable charset.") + + if explain: + logger.removeHandler(explain_handler) + logger.setLevel(previous_logger_level) + + return results + + +def from_fp( + fp: BinaryIO, + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, + language_threshold: float = 0.1, + enable_fallback: bool = True, +) -> CharsetMatches: + """ + Same thing than the function from_bytes but using a file pointer that is already ready. + Will not close the file pointer. 
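+
+    Illustrative sketch (the file name is hypothetical):
+
+    >>> with open('unknown_document.bin', 'rb') as fp:
+    ...     matches = from_fp(fp)
+    >>> matches.best()  # None is a strong indicator of binary (non-text) content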
+ """ + return from_bytes( + fp.read(), + steps, + chunk_size, + threshold, + cp_isolation, + cp_exclusion, + preemptive_behaviour, + explain, + language_threshold, + enable_fallback, + ) + + +def from_path( + path: Union[str, bytes, PathLike], # type: ignore[type-arg] + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, + language_threshold: float = 0.1, + enable_fallback: bool = True, +) -> CharsetMatches: + """ + Same thing than the function from_bytes but with one extra step. Opening and reading given file path in binary mode. + Can raise IOError. + """ + with open(path, "rb") as fp: + return from_fp( + fp, + steps, + chunk_size, + threshold, + cp_isolation, + cp_exclusion, + preemptive_behaviour, + explain, + language_threshold, + enable_fallback, + ) + + +def is_binary( + fp_or_path_or_payload: Union[PathLike, str, BinaryIO, bytes], # type: ignore[type-arg] + steps: int = 5, + chunk_size: int = 512, + threshold: float = 0.20, + cp_isolation: Optional[List[str]] = None, + cp_exclusion: Optional[List[str]] = None, + preemptive_behaviour: bool = True, + explain: bool = False, + language_threshold: float = 0.1, + enable_fallback: bool = False, +) -> bool: + """ + Detect if the given input (file, bytes, or path) points to a binary file. aka. not a string. + Based on the same main heuristic algorithms and default kwargs at the sole exception that fallbacks match + are disabled to be stricter around ASCII-compatible but unlikely to be a string. + """ + if isinstance(fp_or_path_or_payload, (str, PathLike)): + guesses = from_path( + fp_or_path_or_payload, + steps=steps, + chunk_size=chunk_size, + threshold=threshold, + cp_isolation=cp_isolation, + cp_exclusion=cp_exclusion, + preemptive_behaviour=preemptive_behaviour, + explain=explain, + language_threshold=language_threshold, + enable_fallback=enable_fallback, + ) + elif isinstance( + fp_or_path_or_payload, + ( + bytes, + bytearray, + ), + ): + guesses = from_bytes( + fp_or_path_or_payload, + steps=steps, + chunk_size=chunk_size, + threshold=threshold, + cp_isolation=cp_isolation, + cp_exclusion=cp_exclusion, + preemptive_behaviour=preemptive_behaviour, + explain=explain, + language_threshold=language_threshold, + enable_fallback=enable_fallback, + ) + else: + guesses = from_fp( + fp_or_path_or_payload, + steps=steps, + chunk_size=chunk_size, + threshold=threshold, + cp_isolation=cp_isolation, + cp_exclusion=cp_exclusion, + preemptive_behaviour=preemptive_behaviour, + explain=explain, + language_threshold=language_threshold, + enable_fallback=enable_fallback, + ) + + return not guesses diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/cd.py b/templates/skills/file_manager/dependencies/charset_normalizer/cd.py new file mode 100644 index 00000000..4ea6760c --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/cd.py @@ -0,0 +1,395 @@ +import importlib +from codecs import IncrementalDecoder +from collections import Counter +from functools import lru_cache +from typing import Counter as TypeCounter, Dict, List, Optional, Tuple + +from .constant import ( + FREQUENCIES, + KO_NAMES, + LANGUAGE_SUPPORTED_COUNT, + TOO_SMALL_SEQUENCE, + ZH_NAMES, +) +from .md import is_suspiciously_successive_range +from .models import CoherenceMatches +from .utils import ( + is_accentuated, + is_latin, + is_multi_byte_encoding, + 
is_unicode_range_secondary, + unicode_range, +) + + +def encoding_unicode_range(iana_name: str) -> List[str]: + """ + Return associated unicode ranges in a single byte code page. + """ + if is_multi_byte_encoding(iana_name): + raise IOError("Function not supported on multi-byte code page") + + decoder = importlib.import_module( + "encodings.{}".format(iana_name) + ).IncrementalDecoder + + p: IncrementalDecoder = decoder(errors="ignore") + seen_ranges: Dict[str, int] = {} + character_count: int = 0 + + for i in range(0x40, 0xFF): + chunk: str = p.decode(bytes([i])) + + if chunk: + character_range: Optional[str] = unicode_range(chunk) + + if character_range is None: + continue + + if is_unicode_range_secondary(character_range) is False: + if character_range not in seen_ranges: + seen_ranges[character_range] = 0 + seen_ranges[character_range] += 1 + character_count += 1 + + return sorted( + [ + character_range + for character_range in seen_ranges + if seen_ranges[character_range] / character_count >= 0.15 + ] + ) + + +def unicode_range_languages(primary_range: str) -> List[str]: + """ + Return inferred languages used with a unicode range. + """ + languages: List[str] = [] + + for language, characters in FREQUENCIES.items(): + for character in characters: + if unicode_range(character) == primary_range: + languages.append(language) + break + + return languages + + +@lru_cache() +def encoding_languages(iana_name: str) -> List[str]: + """ + Single-byte encoding language association. Some code page are heavily linked to particular language(s). + This function does the correspondence. + """ + unicode_ranges: List[str] = encoding_unicode_range(iana_name) + primary_range: Optional[str] = None + + for specified_range in unicode_ranges: + if "Latin" not in specified_range: + primary_range = specified_range + break + + if primary_range is None: + return ["Latin Based"] + + return unicode_range_languages(primary_range) + + +@lru_cache() +def mb_encoding_languages(iana_name: str) -> List[str]: + """ + Multi-byte encoding language association. Some code page are heavily linked to particular language(s). + This function does the correspondence. + """ + if ( + iana_name.startswith("shift_") + or iana_name.startswith("iso2022_jp") + or iana_name.startswith("euc_j") + or iana_name == "cp932" + ): + return ["Japanese"] + if iana_name.startswith("gb") or iana_name in ZH_NAMES: + return ["Chinese"] + if iana_name.startswith("iso2022_kr") or iana_name in KO_NAMES: + return ["Korean"] + + return [] + + +@lru_cache(maxsize=LANGUAGE_SUPPORTED_COUNT) +def get_target_features(language: str) -> Tuple[bool, bool]: + """ + Determine main aspects from a supported language if it contains accents and if is pure Latin. + """ + target_have_accents: bool = False + target_pure_latin: bool = True + + for character in FREQUENCIES[language]: + if not target_have_accents and is_accentuated(character): + target_have_accents = True + if target_pure_latin and is_latin(character) is False: + target_pure_latin = False + + return target_have_accents, target_pure_latin + + +def alphabet_languages( + characters: List[str], ignore_non_latin: bool = False +) -> List[str]: + """ + Return associated languages associated to given characters. 
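+    Illustrative example (hypothetical input): alphabet_languages(list("eantsrilodcumfp")) would be
+    expected to return several Latin-based candidates such as 'English' and 'French'.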
+ """ + languages: List[Tuple[str, float]] = [] + + source_have_accents = any(is_accentuated(character) for character in characters) + + for language, language_characters in FREQUENCIES.items(): + target_have_accents, target_pure_latin = get_target_features(language) + + if ignore_non_latin and target_pure_latin is False: + continue + + if target_have_accents is False and source_have_accents: + continue + + character_count: int = len(language_characters) + + character_match_count: int = len( + [c for c in language_characters if c in characters] + ) + + ratio: float = character_match_count / character_count + + if ratio >= 0.2: + languages.append((language, ratio)) + + languages = sorted(languages, key=lambda x: x[1], reverse=True) + + return [compatible_language[0] for compatible_language in languages] + + +def characters_popularity_compare( + language: str, ordered_characters: List[str] +) -> float: + """ + Determine if a ordered characters list (by occurrence from most appearance to rarest) match a particular language. + The result is a ratio between 0. (absolutely no correspondence) and 1. (near perfect fit). + Beware that is function is not strict on the match in order to ease the detection. (Meaning close match is 1.) + """ + if language not in FREQUENCIES: + raise ValueError("{} not available".format(language)) + + character_approved_count: int = 0 + FREQUENCIES_language_set = set(FREQUENCIES[language]) + + ordered_characters_count: int = len(ordered_characters) + target_language_characters_count: int = len(FREQUENCIES[language]) + + large_alphabet: bool = target_language_characters_count > 26 + + for character, character_rank in zip( + ordered_characters, range(0, ordered_characters_count) + ): + if character not in FREQUENCIES_language_set: + continue + + character_rank_in_language: int = FREQUENCIES[language].index(character) + expected_projection_ratio: float = ( + target_language_characters_count / ordered_characters_count + ) + character_rank_projection: int = int(character_rank * expected_projection_ratio) + + if ( + large_alphabet is False + and abs(character_rank_projection - character_rank_in_language) > 4 + ): + continue + + if ( + large_alphabet is True + and abs(character_rank_projection - character_rank_in_language) + < target_language_characters_count / 3 + ): + character_approved_count += 1 + continue + + characters_before_source: List[str] = FREQUENCIES[language][ + 0:character_rank_in_language + ] + characters_after_source: List[str] = FREQUENCIES[language][ + character_rank_in_language: + ] + characters_before: List[str] = ordered_characters[0:character_rank] + characters_after: List[str] = ordered_characters[character_rank:] + + before_match_count: int = len( + set(characters_before) & set(characters_before_source) + ) + + after_match_count: int = len( + set(characters_after) & set(characters_after_source) + ) + + if len(characters_before_source) == 0 and before_match_count <= 4: + character_approved_count += 1 + continue + + if len(characters_after_source) == 0 and after_match_count <= 4: + character_approved_count += 1 + continue + + if ( + before_match_count / len(characters_before_source) >= 0.4 + or after_match_count / len(characters_after_source) >= 0.4 + ): + character_approved_count += 1 + continue + + return character_approved_count / len(ordered_characters) + + +def alpha_unicode_split(decoded_sequence: str) -> List[str]: + """ + Given a decoded text sequence, return a list of str. Unicode range / alphabet separation. + Ex. 
a text containing English/Latin with a bit a Hebrew will return two items in the resulting list; + One containing the latin letters and the other hebrew. + """ + layers: Dict[str, str] = {} + + for character in decoded_sequence: + if character.isalpha() is False: + continue + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + continue + + layer_target_range: Optional[str] = None + + for discovered_range in layers: + if ( + is_suspiciously_successive_range(discovered_range, character_range) + is False + ): + layer_target_range = discovered_range + break + + if layer_target_range is None: + layer_target_range = character_range + + if layer_target_range not in layers: + layers[layer_target_range] = character.lower() + continue + + layers[layer_target_range] += character.lower() + + return list(layers.values()) + + +def merge_coherence_ratios(results: List[CoherenceMatches]) -> CoherenceMatches: + """ + This function merge results previously given by the function coherence_ratio. + The return type is the same as coherence_ratio. + """ + per_language_ratios: Dict[str, List[float]] = {} + for result in results: + for sub_result in result: + language, ratio = sub_result + if language not in per_language_ratios: + per_language_ratios[language] = [ratio] + continue + per_language_ratios[language].append(ratio) + + merge = [ + ( + language, + round( + sum(per_language_ratios[language]) / len(per_language_ratios[language]), + 4, + ), + ) + for language in per_language_ratios + ] + + return sorted(merge, key=lambda x: x[1], reverse=True) + + +def filter_alt_coherence_matches(results: CoherenceMatches) -> CoherenceMatches: + """ + We shall NOT return "English—" in CoherenceMatches because it is an alternative + of "English". This function only keeps the best match and remove the em-dash in it. + """ + index_results: Dict[str, List[float]] = dict() + + for result in results: + language, ratio = result + no_em_name: str = language.replace("—", "") + + if no_em_name not in index_results: + index_results[no_em_name] = [] + + index_results[no_em_name].append(ratio) + + if any(len(index_results[e]) > 1 for e in index_results): + filtered_results: CoherenceMatches = [] + + for language in index_results: + filtered_results.append((language, max(index_results[language]))) + + return filtered_results + + return results + + +@lru_cache(maxsize=2048) +def coherence_ratio( + decoded_sequence: str, threshold: float = 0.1, lg_inclusion: Optional[str] = None +) -> CoherenceMatches: + """ + Detect ANY language that can be identified in given sequence. The sequence will be analysed by layers. + A layer = Character extraction by alphabets/ranges. 
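+    Illustrative expectation (hypothetical input): for a sufficiently long text mixing Russian and
+    English, the result would contain both a Cyrillic-based candidate (e.g. 'Russian') and a
+    Latin-based one (e.g. 'English'), each paired with its coherence ratio.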
+ """ + + results: List[Tuple[str, float]] = [] + ignore_non_latin: bool = False + + sufficient_match_count: int = 0 + + lg_inclusion_list = lg_inclusion.split(",") if lg_inclusion is not None else [] + if "Latin Based" in lg_inclusion_list: + ignore_non_latin = True + lg_inclusion_list.remove("Latin Based") + + for layer in alpha_unicode_split(decoded_sequence): + sequence_frequencies: TypeCounter[str] = Counter(layer) + most_common = sequence_frequencies.most_common() + + character_count: int = sum(o for c, o in most_common) + + if character_count <= TOO_SMALL_SEQUENCE: + continue + + popular_character_ordered: List[str] = [c for c, o in most_common] + + for language in lg_inclusion_list or alphabet_languages( + popular_character_ordered, ignore_non_latin + ): + ratio: float = characters_popularity_compare( + language, popular_character_ordered + ) + + if ratio < threshold: + continue + elif ratio >= 0.8: + sufficient_match_count += 1 + + results.append((language, round(ratio, 4))) + + if sufficient_match_count >= 3: + break + + return sorted( + filter_alt_coherence_matches(results), key=lambda x: x[1], reverse=True + ) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/cli/__init__.py b/templates/skills/file_manager/dependencies/charset_normalizer/cli/__init__.py new file mode 100644 index 00000000..d95fedfe --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/cli/__init__.py @@ -0,0 +1,6 @@ +from .__main__ import cli_detect, query_yes_no + +__all__ = ( + "cli_detect", + "query_yes_no", +) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/cli/__main__.py b/templates/skills/file_manager/dependencies/charset_normalizer/cli/__main__.py new file mode 100644 index 00000000..f4bcbaac --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/cli/__main__.py @@ -0,0 +1,296 @@ +import argparse +import sys +from json import dumps +from os.path import abspath, basename, dirname, join, realpath +from platform import python_version +from typing import List, Optional +from unicodedata import unidata_version + +import charset_normalizer.md as md_module +from charset_normalizer import from_fp +from charset_normalizer.models import CliDetectionResult +from charset_normalizer.version import __version__ + + +def query_yes_no(question: str, default: str = "yes") -> bool: + """Ask a yes/no question via input() and return their answer. + + "question" is a string that is presented to the user. + "default" is the presumed answer if the user just hits . + It must be "yes" (the default), "no" or None (meaning + an answer is required of the user). + + The "answer" return value is True for "yes" or False for "no". 
+ + Credit goes to (c) https://stackoverflow.com/questions/3041986/apt-command-line-interface-like-yes-no-input + """ + valid = {"yes": True, "y": True, "ye": True, "no": False, "n": False} + if default is None: + prompt = " [y/n] " + elif default == "yes": + prompt = " [Y/n] " + elif default == "no": + prompt = " [y/N] " + else: + raise ValueError("invalid default answer: '%s'" % default) + + while True: + sys.stdout.write(question + prompt) + choice = input().lower() + if default is not None and choice == "": + return valid[default] + elif choice in valid: + return valid[choice] + else: + sys.stdout.write("Please respond with 'yes' or 'no' " "(or 'y' or 'n').\n") + + +def cli_detect(argv: Optional[List[str]] = None) -> int: + """ + CLI assistant using ARGV and ArgumentParser + :param argv: + :return: 0 if everything is fine, anything else equal trouble + """ + parser = argparse.ArgumentParser( + description="The Real First Universal Charset Detector. " + "Discover originating encoding used on text file. " + "Normalize text to unicode." + ) + + parser.add_argument( + "files", type=argparse.FileType("rb"), nargs="+", help="File(s) to be analysed" + ) + parser.add_argument( + "-v", + "--verbose", + action="store_true", + default=False, + dest="verbose", + help="Display complementary information about file if any. " + "Stdout will contain logs about the detection process.", + ) + parser.add_argument( + "-a", + "--with-alternative", + action="store_true", + default=False, + dest="alternatives", + help="Output complementary possibilities if any. Top-level JSON WILL be a list.", + ) + parser.add_argument( + "-n", + "--normalize", + action="store_true", + default=False, + dest="normalize", + help="Permit to normalize input file. If not set, program does not write anything.", + ) + parser.add_argument( + "-m", + "--minimal", + action="store_true", + default=False, + dest="minimal", + help="Only output the charset detected to STDOUT. Disabling JSON output.", + ) + parser.add_argument( + "-r", + "--replace", + action="store_true", + default=False, + dest="replace", + help="Replace file when trying to normalize it instead of creating a new one.", + ) + parser.add_argument( + "-f", + "--force", + action="store_true", + default=False, + dest="force", + help="Replace file without asking if you are sure, use this flag with caution.", + ) + parser.add_argument( + "-t", + "--threshold", + action="store", + default=0.2, + type=float, + dest="threshold", + help="Define a custom maximum amount of chaos allowed in decoded content. 0. <= chaos <= 1.", + ) + parser.add_argument( + "--version", + action="version", + version="Charset-Normalizer {} - Python {} - Unicode {} - SpeedUp {}".format( + __version__, + python_version(), + unidata_version, + "OFF" if md_module.__file__.lower().endswith(".py") else "ON", + ), + help="Show version information and exit.", + ) + + args = parser.parse_args(argv) + + if args.replace is True and args.normalize is False: + print("Use --replace in addition of --normalize only.", file=sys.stderr) + return 1 + + if args.force is True and args.replace is False: + print("Use --force in addition of --replace only.", file=sys.stderr) + return 1 + + if args.threshold < 0.0 or args.threshold > 1.0: + print("--threshold VALUE should be between 0. 
AND 1.", file=sys.stderr) + return 1 + + x_ = [] + + for my_file in args.files: + matches = from_fp(my_file, threshold=args.threshold, explain=args.verbose) + + best_guess = matches.best() + + if best_guess is None: + print( + 'Unable to identify originating encoding for "{}". {}'.format( + my_file.name, + "Maybe try increasing maximum amount of chaos." + if args.threshold < 1.0 + else "", + ), + file=sys.stderr, + ) + x_.append( + CliDetectionResult( + abspath(my_file.name), + None, + [], + [], + "Unknown", + [], + False, + 1.0, + 0.0, + None, + True, + ) + ) + else: + x_.append( + CliDetectionResult( + abspath(my_file.name), + best_guess.encoding, + best_guess.encoding_aliases, + [ + cp + for cp in best_guess.could_be_from_charset + if cp != best_guess.encoding + ], + best_guess.language, + best_guess.alphabets, + best_guess.bom, + best_guess.percent_chaos, + best_guess.percent_coherence, + None, + True, + ) + ) + + if len(matches) > 1 and args.alternatives: + for el in matches: + if el != best_guess: + x_.append( + CliDetectionResult( + abspath(my_file.name), + el.encoding, + el.encoding_aliases, + [ + cp + for cp in el.could_be_from_charset + if cp != el.encoding + ], + el.language, + el.alphabets, + el.bom, + el.percent_chaos, + el.percent_coherence, + None, + False, + ) + ) + + if args.normalize is True: + if best_guess.encoding.startswith("utf") is True: + print( + '"{}" file does not need to be normalized, as it already came from unicode.'.format( + my_file.name + ), + file=sys.stderr, + ) + if my_file.closed is False: + my_file.close() + continue + + dir_path = dirname(realpath(my_file.name)) + file_name = basename(realpath(my_file.name)) + + o_: List[str] = file_name.split(".") + + if args.replace is False: + o_.insert(-1, best_guess.encoding) + if my_file.closed is False: + my_file.close() + elif ( + args.force is False + and query_yes_no( + 'Are you sure to normalize "{}" by replacing it ?'.format( + my_file.name + ), + "no", + ) + is False + ): + if my_file.closed is False: + my_file.close() + continue + + try: + x_[0].unicode_path = join(dir_path, ".".join(o_)) + + with open(x_[0].unicode_path, "w", encoding="utf-8") as fp: + fp.write(str(best_guess)) + except IOError as e: + print(str(e), file=sys.stderr) + if my_file.closed is False: + my_file.close() + return 2 + + if my_file.closed is False: + my_file.close() + + if args.minimal is False: + print( + dumps( + [el.__dict__ for el in x_] if len(x_) > 1 else x_[0].__dict__, + ensure_ascii=True, + indent=4, + ) + ) + else: + for my_file in args.files: + print( + ", ".join( + [ + el.encoding or "undefined" + for el in x_ + if el.path == abspath(my_file.name) + ] + ) + ) + + return 0 + + +if __name__ == "__main__": + cli_detect() diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/constant.py b/templates/skills/file_manager/dependencies/charset_normalizer/constant.py new file mode 100644 index 00000000..86349046 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/constant.py @@ -0,0 +1,1995 @@ +# -*- coding: utf-8 -*- +from codecs import BOM_UTF8, BOM_UTF16_BE, BOM_UTF16_LE, BOM_UTF32_BE, BOM_UTF32_LE +from encodings.aliases import aliases +from re import IGNORECASE, compile as re_compile +from typing import Dict, List, Set, Union + +# Contain for each eligible encoding a list of/item bytes SIG/BOM +ENCODING_MARKS: Dict[str, Union[bytes, List[bytes]]] = { + "utf_8": BOM_UTF8, + "utf_7": [ + b"\x2b\x2f\x76\x38", + b"\x2b\x2f\x76\x39", + b"\x2b\x2f\x76\x2b", + 
b"\x2b\x2f\x76\x2f", + b"\x2b\x2f\x76\x38\x2d", + ], + "gb18030": b"\x84\x31\x95\x33", + "utf_32": [BOM_UTF32_BE, BOM_UTF32_LE], + "utf_16": [BOM_UTF16_BE, BOM_UTF16_LE], +} + +TOO_SMALL_SEQUENCE: int = 32 +TOO_BIG_SEQUENCE: int = int(10e6) + +UTF8_MAXIMAL_ALLOCATION: int = 1_112_064 + +# Up-to-date Unicode ucd/15.0.0 +UNICODE_RANGES_COMBINED: Dict[str, range] = { + "Control character": range(32), + "Basic Latin": range(32, 128), + "Latin-1 Supplement": range(128, 256), + "Latin Extended-A": range(256, 384), + "Latin Extended-B": range(384, 592), + "IPA Extensions": range(592, 688), + "Spacing Modifier Letters": range(688, 768), + "Combining Diacritical Marks": range(768, 880), + "Greek and Coptic": range(880, 1024), + "Cyrillic": range(1024, 1280), + "Cyrillic Supplement": range(1280, 1328), + "Armenian": range(1328, 1424), + "Hebrew": range(1424, 1536), + "Arabic": range(1536, 1792), + "Syriac": range(1792, 1872), + "Arabic Supplement": range(1872, 1920), + "Thaana": range(1920, 1984), + "NKo": range(1984, 2048), + "Samaritan": range(2048, 2112), + "Mandaic": range(2112, 2144), + "Syriac Supplement": range(2144, 2160), + "Arabic Extended-B": range(2160, 2208), + "Arabic Extended-A": range(2208, 2304), + "Devanagari": range(2304, 2432), + "Bengali": range(2432, 2560), + "Gurmukhi": range(2560, 2688), + "Gujarati": range(2688, 2816), + "Oriya": range(2816, 2944), + "Tamil": range(2944, 3072), + "Telugu": range(3072, 3200), + "Kannada": range(3200, 3328), + "Malayalam": range(3328, 3456), + "Sinhala": range(3456, 3584), + "Thai": range(3584, 3712), + "Lao": range(3712, 3840), + "Tibetan": range(3840, 4096), + "Myanmar": range(4096, 4256), + "Georgian": range(4256, 4352), + "Hangul Jamo": range(4352, 4608), + "Ethiopic": range(4608, 4992), + "Ethiopic Supplement": range(4992, 5024), + "Cherokee": range(5024, 5120), + "Unified Canadian Aboriginal Syllabics": range(5120, 5760), + "Ogham": range(5760, 5792), + "Runic": range(5792, 5888), + "Tagalog": range(5888, 5920), + "Hanunoo": range(5920, 5952), + "Buhid": range(5952, 5984), + "Tagbanwa": range(5984, 6016), + "Khmer": range(6016, 6144), + "Mongolian": range(6144, 6320), + "Unified Canadian Aboriginal Syllabics Extended": range(6320, 6400), + "Limbu": range(6400, 6480), + "Tai Le": range(6480, 6528), + "New Tai Lue": range(6528, 6624), + "Khmer Symbols": range(6624, 6656), + "Buginese": range(6656, 6688), + "Tai Tham": range(6688, 6832), + "Combining Diacritical Marks Extended": range(6832, 6912), + "Balinese": range(6912, 7040), + "Sundanese": range(7040, 7104), + "Batak": range(7104, 7168), + "Lepcha": range(7168, 7248), + "Ol Chiki": range(7248, 7296), + "Cyrillic Extended-C": range(7296, 7312), + "Georgian Extended": range(7312, 7360), + "Sundanese Supplement": range(7360, 7376), + "Vedic Extensions": range(7376, 7424), + "Phonetic Extensions": range(7424, 7552), + "Phonetic Extensions Supplement": range(7552, 7616), + "Combining Diacritical Marks Supplement": range(7616, 7680), + "Latin Extended Additional": range(7680, 7936), + "Greek Extended": range(7936, 8192), + "General Punctuation": range(8192, 8304), + "Superscripts and Subscripts": range(8304, 8352), + "Currency Symbols": range(8352, 8400), + "Combining Diacritical Marks for Symbols": range(8400, 8448), + "Letterlike Symbols": range(8448, 8528), + "Number Forms": range(8528, 8592), + "Arrows": range(8592, 8704), + "Mathematical Operators": range(8704, 8960), + "Miscellaneous Technical": range(8960, 9216), + "Control Pictures": range(9216, 9280), + "Optical Character 
Recognition": range(9280, 9312), + "Enclosed Alphanumerics": range(9312, 9472), + "Box Drawing": range(9472, 9600), + "Block Elements": range(9600, 9632), + "Geometric Shapes": range(9632, 9728), + "Miscellaneous Symbols": range(9728, 9984), + "Dingbats": range(9984, 10176), + "Miscellaneous Mathematical Symbols-A": range(10176, 10224), + "Supplemental Arrows-A": range(10224, 10240), + "Braille Patterns": range(10240, 10496), + "Supplemental Arrows-B": range(10496, 10624), + "Miscellaneous Mathematical Symbols-B": range(10624, 10752), + "Supplemental Mathematical Operators": range(10752, 11008), + "Miscellaneous Symbols and Arrows": range(11008, 11264), + "Glagolitic": range(11264, 11360), + "Latin Extended-C": range(11360, 11392), + "Coptic": range(11392, 11520), + "Georgian Supplement": range(11520, 11568), + "Tifinagh": range(11568, 11648), + "Ethiopic Extended": range(11648, 11744), + "Cyrillic Extended-A": range(11744, 11776), + "Supplemental Punctuation": range(11776, 11904), + "CJK Radicals Supplement": range(11904, 12032), + "Kangxi Radicals": range(12032, 12256), + "Ideographic Description Characters": range(12272, 12288), + "CJK Symbols and Punctuation": range(12288, 12352), + "Hiragana": range(12352, 12448), + "Katakana": range(12448, 12544), + "Bopomofo": range(12544, 12592), + "Hangul Compatibility Jamo": range(12592, 12688), + "Kanbun": range(12688, 12704), + "Bopomofo Extended": range(12704, 12736), + "CJK Strokes": range(12736, 12784), + "Katakana Phonetic Extensions": range(12784, 12800), + "Enclosed CJK Letters and Months": range(12800, 13056), + "CJK Compatibility": range(13056, 13312), + "CJK Unified Ideographs Extension A": range(13312, 19904), + "Yijing Hexagram Symbols": range(19904, 19968), + "CJK Unified Ideographs": range(19968, 40960), + "Yi Syllables": range(40960, 42128), + "Yi Radicals": range(42128, 42192), + "Lisu": range(42192, 42240), + "Vai": range(42240, 42560), + "Cyrillic Extended-B": range(42560, 42656), + "Bamum": range(42656, 42752), + "Modifier Tone Letters": range(42752, 42784), + "Latin Extended-D": range(42784, 43008), + "Syloti Nagri": range(43008, 43056), + "Common Indic Number Forms": range(43056, 43072), + "Phags-pa": range(43072, 43136), + "Saurashtra": range(43136, 43232), + "Devanagari Extended": range(43232, 43264), + "Kayah Li": range(43264, 43312), + "Rejang": range(43312, 43360), + "Hangul Jamo Extended-A": range(43360, 43392), + "Javanese": range(43392, 43488), + "Myanmar Extended-B": range(43488, 43520), + "Cham": range(43520, 43616), + "Myanmar Extended-A": range(43616, 43648), + "Tai Viet": range(43648, 43744), + "Meetei Mayek Extensions": range(43744, 43776), + "Ethiopic Extended-A": range(43776, 43824), + "Latin Extended-E": range(43824, 43888), + "Cherokee Supplement": range(43888, 43968), + "Meetei Mayek": range(43968, 44032), + "Hangul Syllables": range(44032, 55216), + "Hangul Jamo Extended-B": range(55216, 55296), + "High Surrogates": range(55296, 56192), + "High Private Use Surrogates": range(56192, 56320), + "Low Surrogates": range(56320, 57344), + "Private Use Area": range(57344, 63744), + "CJK Compatibility Ideographs": range(63744, 64256), + "Alphabetic Presentation Forms": range(64256, 64336), + "Arabic Presentation Forms-A": range(64336, 65024), + "Variation Selectors": range(65024, 65040), + "Vertical Forms": range(65040, 65056), + "Combining Half Marks": range(65056, 65072), + "CJK Compatibility Forms": range(65072, 65104), + "Small Form Variants": range(65104, 65136), + "Arabic Presentation Forms-B": range(65136, 
65280), + "Halfwidth and Fullwidth Forms": range(65280, 65520), + "Specials": range(65520, 65536), + "Linear B Syllabary": range(65536, 65664), + "Linear B Ideograms": range(65664, 65792), + "Aegean Numbers": range(65792, 65856), + "Ancient Greek Numbers": range(65856, 65936), + "Ancient Symbols": range(65936, 66000), + "Phaistos Disc": range(66000, 66048), + "Lycian": range(66176, 66208), + "Carian": range(66208, 66272), + "Coptic Epact Numbers": range(66272, 66304), + "Old Italic": range(66304, 66352), + "Gothic": range(66352, 66384), + "Old Permic": range(66384, 66432), + "Ugaritic": range(66432, 66464), + "Old Persian": range(66464, 66528), + "Deseret": range(66560, 66640), + "Shavian": range(66640, 66688), + "Osmanya": range(66688, 66736), + "Osage": range(66736, 66816), + "Elbasan": range(66816, 66864), + "Caucasian Albanian": range(66864, 66928), + "Vithkuqi": range(66928, 67008), + "Linear A": range(67072, 67456), + "Latin Extended-F": range(67456, 67520), + "Cypriot Syllabary": range(67584, 67648), + "Imperial Aramaic": range(67648, 67680), + "Palmyrene": range(67680, 67712), + "Nabataean": range(67712, 67760), + "Hatran": range(67808, 67840), + "Phoenician": range(67840, 67872), + "Lydian": range(67872, 67904), + "Meroitic Hieroglyphs": range(67968, 68000), + "Meroitic Cursive": range(68000, 68096), + "Kharoshthi": range(68096, 68192), + "Old South Arabian": range(68192, 68224), + "Old North Arabian": range(68224, 68256), + "Manichaean": range(68288, 68352), + "Avestan": range(68352, 68416), + "Inscriptional Parthian": range(68416, 68448), + "Inscriptional Pahlavi": range(68448, 68480), + "Psalter Pahlavi": range(68480, 68528), + "Old Turkic": range(68608, 68688), + "Old Hungarian": range(68736, 68864), + "Hanifi Rohingya": range(68864, 68928), + "Rumi Numeral Symbols": range(69216, 69248), + "Yezidi": range(69248, 69312), + "Arabic Extended-C": range(69312, 69376), + "Old Sogdian": range(69376, 69424), + "Sogdian": range(69424, 69488), + "Old Uyghur": range(69488, 69552), + "Chorasmian": range(69552, 69600), + "Elymaic": range(69600, 69632), + "Brahmi": range(69632, 69760), + "Kaithi": range(69760, 69840), + "Sora Sompeng": range(69840, 69888), + "Chakma": range(69888, 69968), + "Mahajani": range(69968, 70016), + "Sharada": range(70016, 70112), + "Sinhala Archaic Numbers": range(70112, 70144), + "Khojki": range(70144, 70224), + "Multani": range(70272, 70320), + "Khudawadi": range(70320, 70400), + "Grantha": range(70400, 70528), + "Newa": range(70656, 70784), + "Tirhuta": range(70784, 70880), + "Siddham": range(71040, 71168), + "Modi": range(71168, 71264), + "Mongolian Supplement": range(71264, 71296), + "Takri": range(71296, 71376), + "Ahom": range(71424, 71504), + "Dogra": range(71680, 71760), + "Warang Citi": range(71840, 71936), + "Dives Akuru": range(71936, 72032), + "Nandinagari": range(72096, 72192), + "Zanabazar Square": range(72192, 72272), + "Soyombo": range(72272, 72368), + "Unified Canadian Aboriginal Syllabics Extended-A": range(72368, 72384), + "Pau Cin Hau": range(72384, 72448), + "Devanagari Extended-A": range(72448, 72544), + "Bhaiksuki": range(72704, 72816), + "Marchen": range(72816, 72896), + "Masaram Gondi": range(72960, 73056), + "Gunjala Gondi": range(73056, 73136), + "Makasar": range(73440, 73472), + "Kawi": range(73472, 73568), + "Lisu Supplement": range(73648, 73664), + "Tamil Supplement": range(73664, 73728), + "Cuneiform": range(73728, 74752), + "Cuneiform Numbers and Punctuation": range(74752, 74880), + "Early Dynastic Cuneiform": range(74880, 75088), 
+ "Cypro-Minoan": range(77712, 77824), + "Egyptian Hieroglyphs": range(77824, 78896), + "Egyptian Hieroglyph Format Controls": range(78896, 78944), + "Anatolian Hieroglyphs": range(82944, 83584), + "Bamum Supplement": range(92160, 92736), + "Mro": range(92736, 92784), + "Tangsa": range(92784, 92880), + "Bassa Vah": range(92880, 92928), + "Pahawh Hmong": range(92928, 93072), + "Medefaidrin": range(93760, 93856), + "Miao": range(93952, 94112), + "Ideographic Symbols and Punctuation": range(94176, 94208), + "Tangut": range(94208, 100352), + "Tangut Components": range(100352, 101120), + "Khitan Small Script": range(101120, 101632), + "Tangut Supplement": range(101632, 101760), + "Kana Extended-B": range(110576, 110592), + "Kana Supplement": range(110592, 110848), + "Kana Extended-A": range(110848, 110896), + "Small Kana Extension": range(110896, 110960), + "Nushu": range(110960, 111360), + "Duployan": range(113664, 113824), + "Shorthand Format Controls": range(113824, 113840), + "Znamenny Musical Notation": range(118528, 118736), + "Byzantine Musical Symbols": range(118784, 119040), + "Musical Symbols": range(119040, 119296), + "Ancient Greek Musical Notation": range(119296, 119376), + "Kaktovik Numerals": range(119488, 119520), + "Mayan Numerals": range(119520, 119552), + "Tai Xuan Jing Symbols": range(119552, 119648), + "Counting Rod Numerals": range(119648, 119680), + "Mathematical Alphanumeric Symbols": range(119808, 120832), + "Sutton SignWriting": range(120832, 121520), + "Latin Extended-G": range(122624, 122880), + "Glagolitic Supplement": range(122880, 122928), + "Cyrillic Extended-D": range(122928, 123024), + "Nyiakeng Puachue Hmong": range(123136, 123216), + "Toto": range(123536, 123584), + "Wancho": range(123584, 123648), + "Nag Mundari": range(124112, 124160), + "Ethiopic Extended-B": range(124896, 124928), + "Mende Kikakui": range(124928, 125152), + "Adlam": range(125184, 125280), + "Indic Siyaq Numbers": range(126064, 126144), + "Ottoman Siyaq Numbers": range(126208, 126288), + "Arabic Mathematical Alphabetic Symbols": range(126464, 126720), + "Mahjong Tiles": range(126976, 127024), + "Domino Tiles": range(127024, 127136), + "Playing Cards": range(127136, 127232), + "Enclosed Alphanumeric Supplement": range(127232, 127488), + "Enclosed Ideographic Supplement": range(127488, 127744), + "Miscellaneous Symbols and Pictographs": range(127744, 128512), + "Emoticons range(Emoji)": range(128512, 128592), + "Ornamental Dingbats": range(128592, 128640), + "Transport and Map Symbols": range(128640, 128768), + "Alchemical Symbols": range(128768, 128896), + "Geometric Shapes Extended": range(128896, 129024), + "Supplemental Arrows-C": range(129024, 129280), + "Supplemental Symbols and Pictographs": range(129280, 129536), + "Chess Symbols": range(129536, 129648), + "Symbols and Pictographs Extended-A": range(129648, 129792), + "Symbols for Legacy Computing": range(129792, 130048), + "CJK Unified Ideographs Extension B": range(131072, 173792), + "CJK Unified Ideographs Extension C": range(173824, 177984), + "CJK Unified Ideographs Extension D": range(177984, 178208), + "CJK Unified Ideographs Extension E": range(178208, 183984), + "CJK Unified Ideographs Extension F": range(183984, 191472), + "CJK Compatibility Ideographs Supplement": range(194560, 195104), + "CJK Unified Ideographs Extension G": range(196608, 201552), + "CJK Unified Ideographs Extension H": range(201552, 205744), + "Tags": range(917504, 917632), + "Variation Selectors Supplement": range(917760, 918000), + "Supplementary 
Private Use Area-A": range(983040, 1048576), + "Supplementary Private Use Area-B": range(1048576, 1114112), +} + + +UNICODE_SECONDARY_RANGE_KEYWORD: List[str] = [ + "Supplement", + "Extended", + "Extensions", + "Modifier", + "Marks", + "Punctuation", + "Symbols", + "Forms", + "Operators", + "Miscellaneous", + "Drawing", + "Block", + "Shapes", + "Supplemental", + "Tags", +] + +RE_POSSIBLE_ENCODING_INDICATION = re_compile( + r"(?:(?:encoding)|(?:charset)|(?:coding))(?:[\:= ]{1,10})(?:[\"\']?)([a-zA-Z0-9\-_]+)(?:[\"\']?)", + IGNORECASE, +) + +IANA_NO_ALIASES = [ + "cp720", + "cp737", + "cp856", + "cp874", + "cp875", + "cp1006", + "koi8_r", + "koi8_t", + "koi8_u", +] + +IANA_SUPPORTED: List[str] = sorted( + filter( + lambda x: x.endswith("_codec") is False + and x not in {"rot_13", "tactis", "mbcs"}, + list(set(aliases.values())) + IANA_NO_ALIASES, + ) +) + +IANA_SUPPORTED_COUNT: int = len(IANA_SUPPORTED) + +# pre-computed code page that are similar using the function cp_similarity. +IANA_SUPPORTED_SIMILAR: Dict[str, List[str]] = { + "cp037": ["cp1026", "cp1140", "cp273", "cp500"], + "cp1026": ["cp037", "cp1140", "cp273", "cp500"], + "cp1125": ["cp866"], + "cp1140": ["cp037", "cp1026", "cp273", "cp500"], + "cp1250": ["iso8859_2"], + "cp1251": ["kz1048", "ptcp154"], + "cp1252": ["iso8859_15", "iso8859_9", "latin_1"], + "cp1253": ["iso8859_7"], + "cp1254": ["iso8859_15", "iso8859_9", "latin_1"], + "cp1257": ["iso8859_13"], + "cp273": ["cp037", "cp1026", "cp1140", "cp500"], + "cp437": ["cp850", "cp858", "cp860", "cp861", "cp862", "cp863", "cp865"], + "cp500": ["cp037", "cp1026", "cp1140", "cp273"], + "cp850": ["cp437", "cp857", "cp858", "cp865"], + "cp857": ["cp850", "cp858", "cp865"], + "cp858": ["cp437", "cp850", "cp857", "cp865"], + "cp860": ["cp437", "cp861", "cp862", "cp863", "cp865"], + "cp861": ["cp437", "cp860", "cp862", "cp863", "cp865"], + "cp862": ["cp437", "cp860", "cp861", "cp863", "cp865"], + "cp863": ["cp437", "cp860", "cp861", "cp862", "cp865"], + "cp865": ["cp437", "cp850", "cp857", "cp858", "cp860", "cp861", "cp862", "cp863"], + "cp866": ["cp1125"], + "iso8859_10": ["iso8859_14", "iso8859_15", "iso8859_4", "iso8859_9", "latin_1"], + "iso8859_11": ["tis_620"], + "iso8859_13": ["cp1257"], + "iso8859_14": [ + "iso8859_10", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_15": [ + "cp1252", + "cp1254", + "iso8859_10", + "iso8859_14", + "iso8859_16", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_16": [ + "iso8859_14", + "iso8859_15", + "iso8859_2", + "iso8859_3", + "iso8859_9", + "latin_1", + ], + "iso8859_2": ["cp1250", "iso8859_16", "iso8859_4"], + "iso8859_3": ["iso8859_14", "iso8859_15", "iso8859_16", "iso8859_9", "latin_1"], + "iso8859_4": ["iso8859_10", "iso8859_2", "iso8859_9", "latin_1"], + "iso8859_7": ["cp1253"], + "iso8859_9": [ + "cp1252", + "cp1254", + "cp1258", + "iso8859_10", + "iso8859_14", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_4", + "latin_1", + ], + "kz1048": ["cp1251", "ptcp154"], + "latin_1": [ + "cp1252", + "cp1254", + "cp1258", + "iso8859_10", + "iso8859_14", + "iso8859_15", + "iso8859_16", + "iso8859_3", + "iso8859_4", + "iso8859_9", + ], + "mac_iceland": ["mac_roman", "mac_turkish"], + "mac_roman": ["mac_iceland", "mac_turkish"], + "mac_turkish": ["mac_iceland", "mac_roman"], + "ptcp154": ["cp1251", "kz1048"], + "tis_620": ["iso8859_11"], +} + + +CHARDET_CORRESPONDENCE: Dict[str, str] = { + "iso2022_kr": "ISO-2022-KR", + "iso2022_jp": "ISO-2022-JP", + "euc_kr": "EUC-KR", + "tis_620": 
"TIS-620", + "utf_32": "UTF-32", + "euc_jp": "EUC-JP", + "koi8_r": "KOI8-R", + "iso8859_1": "ISO-8859-1", + "iso8859_2": "ISO-8859-2", + "iso8859_5": "ISO-8859-5", + "iso8859_6": "ISO-8859-6", + "iso8859_7": "ISO-8859-7", + "iso8859_8": "ISO-8859-8", + "utf_16": "UTF-16", + "cp855": "IBM855", + "mac_cyrillic": "MacCyrillic", + "gb2312": "GB2312", + "gb18030": "GB18030", + "cp932": "CP932", + "cp866": "IBM866", + "utf_8": "utf-8", + "utf_8_sig": "UTF-8-SIG", + "shift_jis": "SHIFT_JIS", + "big5": "Big5", + "cp1250": "windows-1250", + "cp1251": "windows-1251", + "cp1252": "Windows-1252", + "cp1253": "windows-1253", + "cp1255": "windows-1255", + "cp1256": "windows-1256", + "cp1254": "Windows-1254", + "cp949": "CP949", +} + + +COMMON_SAFE_ASCII_CHARACTERS: Set[str] = { + "<", + ">", + "=", + ":", + "/", + "&", + ";", + "{", + "}", + "[", + "]", + ",", + "|", + '"', + "-", +} + + +KO_NAMES: Set[str] = {"johab", "cp949", "euc_kr"} +ZH_NAMES: Set[str] = {"big5", "cp950", "big5hkscs", "hz"} + +# Logging LEVEL below DEBUG +TRACE: int = 5 + + +# Language label that contain the em dash "—" +# character are to be considered alternative seq to origin +FREQUENCIES: Dict[str, List[str]] = { + "English": [ + "e", + "a", + "t", + "i", + "o", + "n", + "s", + "r", + "h", + "l", + "d", + "c", + "u", + "m", + "f", + "p", + "g", + "w", + "y", + "b", + "v", + "k", + "x", + "j", + "z", + "q", + ], + "English—": [ + "e", + "a", + "t", + "i", + "o", + "n", + "s", + "r", + "h", + "l", + "d", + "c", + "m", + "u", + "f", + "p", + "g", + "w", + "b", + "y", + "v", + "k", + "j", + "x", + "z", + "q", + ], + "German": [ + "e", + "n", + "i", + "r", + "s", + "t", + "a", + "d", + "h", + "u", + "l", + "g", + "o", + "c", + "m", + "b", + "f", + "k", + "w", + "z", + "p", + "v", + "ü", + "ä", + "ö", + "j", + ], + "French": [ + "e", + "a", + "s", + "n", + "i", + "t", + "r", + "l", + "u", + "o", + "d", + "c", + "p", + "m", + "é", + "v", + "g", + "f", + "b", + "h", + "q", + "à", + "x", + "è", + "y", + "j", + ], + "Dutch": [ + "e", + "n", + "a", + "i", + "r", + "t", + "o", + "d", + "s", + "l", + "g", + "h", + "v", + "m", + "u", + "k", + "c", + "p", + "b", + "w", + "j", + "z", + "f", + "y", + "x", + "ë", + ], + "Italian": [ + "e", + "i", + "a", + "o", + "n", + "l", + "t", + "r", + "s", + "c", + "d", + "u", + "p", + "m", + "g", + "v", + "f", + "b", + "z", + "h", + "q", + "è", + "à", + "k", + "y", + "ò", + ], + "Polish": [ + "a", + "i", + "o", + "e", + "n", + "r", + "z", + "w", + "s", + "c", + "t", + "k", + "y", + "d", + "p", + "m", + "u", + "l", + "j", + "ł", + "g", + "b", + "h", + "ą", + "ę", + "ó", + ], + "Spanish": [ + "e", + "a", + "o", + "n", + "s", + "r", + "i", + "l", + "d", + "t", + "c", + "u", + "m", + "p", + "b", + "g", + "v", + "f", + "y", + "ó", + "h", + "q", + "í", + "j", + "z", + "á", + ], + "Russian": [ + "о", + "а", + "е", + "и", + "н", + "с", + "т", + "р", + "в", + "л", + "к", + "м", + "д", + "п", + "у", + "г", + "я", + "ы", + "з", + "б", + "й", + "ь", + "ч", + "х", + "ж", + "ц", + ], + # Jap-Kanji + "Japanese": [ + "人", + "一", + "大", + "亅", + "丁", + "丨", + "竹", + "笑", + "口", + "日", + "今", + "二", + "彳", + "行", + "十", + "土", + "丶", + "寸", + "寺", + "時", + "乙", + "丿", + "乂", + "气", + "気", + "冂", + "巾", + "亠", + "市", + "目", + "儿", + "見", + "八", + "小", + "凵", + "県", + "月", + "彐", + "門", + "間", + "木", + "東", + "山", + "出", + "本", + "中", + "刀", + "分", + "耳", + "又", + "取", + "最", + "言", + "田", + "心", + "思", + "刂", + "前", + "京", + "尹", + "事", + "生", + "厶", + "云", + "会", + "未", + "来", + "白", + "冫", + "楽", + "灬", + "馬", + "尸", + 
"尺", + "駅", + "明", + "耂", + "者", + "了", + "阝", + "都", + "高", + "卜", + "占", + "厂", + "广", + "店", + "子", + "申", + "奄", + "亻", + "俺", + "上", + "方", + "冖", + "学", + "衣", + "艮", + "食", + "自", + ], + # Jap-Katakana + "Japanese—": [ + "ー", + "ン", + "ス", + "・", + "ル", + "ト", + "リ", + "イ", + "ア", + "ラ", + "ッ", + "ク", + "ド", + "シ", + "レ", + "ジ", + "タ", + "フ", + "ロ", + "カ", + "テ", + "マ", + "ィ", + "グ", + "バ", + "ム", + "プ", + "オ", + "コ", + "デ", + "ニ", + "ウ", + "メ", + "サ", + "ビ", + "ナ", + "ブ", + "ャ", + "エ", + "ュ", + "チ", + "キ", + "ズ", + "ダ", + "パ", + "ミ", + "ェ", + "ョ", + "ハ", + "セ", + "ベ", + "ガ", + "モ", + "ツ", + "ネ", + "ボ", + "ソ", + "ノ", + "ァ", + "ヴ", + "ワ", + "ポ", + "ペ", + "ピ", + "ケ", + "ゴ", + "ギ", + "ザ", + "ホ", + "ゲ", + "ォ", + "ヤ", + "ヒ", + "ユ", + "ヨ", + "ヘ", + "ゼ", + "ヌ", + "ゥ", + "ゾ", + "ヶ", + "ヂ", + "ヲ", + "ヅ", + "ヵ", + "ヱ", + "ヰ", + "ヮ", + "ヽ", + "゠", + "ヾ", + "ヷ", + "ヿ", + "ヸ", + "ヹ", + "ヺ", + ], + # Jap-Hiragana + "Japanese——": [ + "の", + "に", + "る", + "た", + "と", + "は", + "し", + "い", + "を", + "で", + "て", + "が", + "な", + "れ", + "か", + "ら", + "さ", + "っ", + "り", + "す", + "あ", + "も", + "こ", + "ま", + "う", + "く", + "よ", + "き", + "ん", + "め", + "お", + "け", + "そ", + "つ", + "だ", + "や", + "え", + "ど", + "わ", + "ち", + "み", + "せ", + "じ", + "ば", + "へ", + "び", + "ず", + "ろ", + "ほ", + "げ", + "む", + "べ", + "ひ", + "ょ", + "ゆ", + "ぶ", + "ご", + "ゃ", + "ね", + "ふ", + "ぐ", + "ぎ", + "ぼ", + "ゅ", + "づ", + "ざ", + "ぞ", + "ぬ", + "ぜ", + "ぱ", + "ぽ", + "ぷ", + "ぴ", + "ぃ", + "ぁ", + "ぇ", + "ぺ", + "ゞ", + "ぢ", + "ぉ", + "ぅ", + "ゐ", + "ゝ", + "ゑ", + "゛", + "゜", + "ゎ", + "ゔ", + "゚", + "ゟ", + "゙", + "ゕ", + "ゖ", + ], + "Portuguese": [ + "a", + "e", + "o", + "s", + "i", + "r", + "d", + "n", + "t", + "m", + "u", + "c", + "l", + "p", + "g", + "v", + "b", + "f", + "h", + "ã", + "q", + "é", + "ç", + "á", + "z", + "í", + ], + "Swedish": [ + "e", + "a", + "n", + "r", + "t", + "s", + "i", + "l", + "d", + "o", + "m", + "k", + "g", + "v", + "h", + "f", + "u", + "p", + "ä", + "c", + "b", + "ö", + "å", + "y", + "j", + "x", + ], + "Chinese": [ + "的", + "一", + "是", + "不", + "了", + "在", + "人", + "有", + "我", + "他", + "这", + "个", + "们", + "中", + "来", + "上", + "大", + "为", + "和", + "国", + "地", + "到", + "以", + "说", + "时", + "要", + "就", + "出", + "会", + "可", + "也", + "你", + "对", + "生", + "能", + "而", + "子", + "那", + "得", + "于", + "着", + "下", + "自", + "之", + "年", + "过", + "发", + "后", + "作", + "里", + "用", + "道", + "行", + "所", + "然", + "家", + "种", + "事", + "成", + "方", + "多", + "经", + "么", + "去", + "法", + "学", + "如", + "都", + "同", + "现", + "当", + "没", + "动", + "面", + "起", + "看", + "定", + "天", + "分", + "还", + "进", + "好", + "小", + "部", + "其", + "些", + "主", + "样", + "理", + "心", + "她", + "本", + "前", + "开", + "但", + "因", + "只", + "从", + "想", + "实", + ], + "Ukrainian": [ + "о", + "а", + "н", + "і", + "и", + "р", + "в", + "т", + "е", + "с", + "к", + "л", + "у", + "д", + "м", + "п", + "з", + "я", + "ь", + "б", + "г", + "й", + "ч", + "х", + "ц", + "ї", + ], + "Norwegian": [ + "e", + "r", + "n", + "t", + "a", + "s", + "i", + "o", + "l", + "d", + "g", + "k", + "m", + "v", + "f", + "p", + "u", + "b", + "h", + "å", + "y", + "j", + "ø", + "c", + "æ", + "w", + ], + "Finnish": [ + "a", + "i", + "n", + "t", + "e", + "s", + "l", + "o", + "u", + "k", + "ä", + "m", + "r", + "v", + "j", + "h", + "p", + "y", + "d", + "ö", + "g", + "c", + "b", + "f", + "w", + "z", + ], + "Vietnamese": [ + "n", + "h", + "t", + "i", + "c", + "g", + "a", + "o", + "u", + "m", + "l", + "r", + "à", + "đ", + "s", + "e", + "v", + "p", + "b", + "y", + "ư", + "d", + "á", + "k", + "ộ", + "ế", + ], + "Czech": [ + 
"o", + "e", + "a", + "n", + "t", + "s", + "i", + "l", + "v", + "r", + "k", + "d", + "u", + "m", + "p", + "í", + "c", + "h", + "z", + "á", + "y", + "j", + "b", + "ě", + "é", + "ř", + ], + "Hungarian": [ + "e", + "a", + "t", + "l", + "s", + "n", + "k", + "r", + "i", + "o", + "z", + "á", + "é", + "g", + "m", + "b", + "y", + "v", + "d", + "h", + "u", + "p", + "j", + "ö", + "f", + "c", + ], + "Korean": [ + "이", + "다", + "에", + "의", + "는", + "로", + "하", + "을", + "가", + "고", + "지", + "서", + "한", + "은", + "기", + "으", + "년", + "대", + "사", + "시", + "를", + "리", + "도", + "인", + "스", + "일", + ], + "Indonesian": [ + "a", + "n", + "e", + "i", + "r", + "t", + "u", + "s", + "d", + "k", + "m", + "l", + "g", + "p", + "b", + "o", + "h", + "y", + "j", + "c", + "w", + "f", + "v", + "z", + "x", + "q", + ], + "Turkish": [ + "a", + "e", + "i", + "n", + "r", + "l", + "ı", + "k", + "d", + "t", + "s", + "m", + "y", + "u", + "o", + "b", + "ü", + "ş", + "v", + "g", + "z", + "h", + "c", + "p", + "ç", + "ğ", + ], + "Romanian": [ + "e", + "i", + "a", + "r", + "n", + "t", + "u", + "l", + "o", + "c", + "s", + "d", + "p", + "m", + "ă", + "f", + "v", + "î", + "g", + "b", + "ș", + "ț", + "z", + "h", + "â", + "j", + ], + "Farsi": [ + "ا", + "ی", + "ر", + "د", + "ن", + "ه", + "و", + "م", + "ت", + "ب", + "س", + "ل", + "ک", + "ش", + "ز", + "ف", + "گ", + "ع", + "خ", + "ق", + "ج", + "آ", + "پ", + "ح", + "ط", + "ص", + ], + "Arabic": [ + "ا", + "ل", + "ي", + "م", + "و", + "ن", + "ر", + "ت", + "ب", + "ة", + "ع", + "د", + "س", + "ف", + "ه", + "ك", + "ق", + "أ", + "ح", + "ج", + "ش", + "ط", + "ص", + "ى", + "خ", + "إ", + ], + "Danish": [ + "e", + "r", + "n", + "t", + "a", + "i", + "s", + "d", + "l", + "o", + "g", + "m", + "k", + "f", + "v", + "u", + "b", + "h", + "p", + "å", + "y", + "ø", + "æ", + "c", + "j", + "w", + ], + "Serbian": [ + "а", + "и", + "о", + "е", + "н", + "р", + "с", + "у", + "т", + "к", + "ј", + "в", + "д", + "м", + "п", + "л", + "г", + "з", + "б", + "a", + "i", + "e", + "o", + "n", + "ц", + "ш", + ], + "Lithuanian": [ + "i", + "a", + "s", + "o", + "r", + "e", + "t", + "n", + "u", + "k", + "m", + "l", + "p", + "v", + "d", + "j", + "g", + "ė", + "b", + "y", + "ų", + "š", + "ž", + "c", + "ą", + "į", + ], + "Slovene": [ + "e", + "a", + "i", + "o", + "n", + "r", + "s", + "l", + "t", + "j", + "v", + "k", + "d", + "p", + "m", + "u", + "z", + "b", + "g", + "h", + "č", + "c", + "š", + "ž", + "f", + "y", + ], + "Slovak": [ + "o", + "a", + "e", + "n", + "i", + "r", + "v", + "t", + "s", + "l", + "k", + "d", + "m", + "p", + "u", + "c", + "h", + "j", + "b", + "z", + "á", + "y", + "ý", + "í", + "č", + "é", + ], + "Hebrew": [ + "י", + "ו", + "ה", + "ל", + "ר", + "ב", + "ת", + "מ", + "א", + "ש", + "נ", + "ע", + "ם", + "ד", + "ק", + "ח", + "פ", + "ס", + "כ", + "ג", + "ט", + "צ", + "ן", + "ז", + "ך", + ], + "Bulgarian": [ + "а", + "и", + "о", + "е", + "н", + "т", + "р", + "с", + "в", + "л", + "к", + "д", + "п", + "м", + "з", + "г", + "я", + "ъ", + "у", + "б", + "ч", + "ц", + "й", + "ж", + "щ", + "х", + ], + "Croatian": [ + "a", + "i", + "o", + "e", + "n", + "r", + "j", + "s", + "t", + "u", + "k", + "l", + "v", + "d", + "m", + "p", + "g", + "z", + "b", + "c", + "č", + "h", + "š", + "ž", + "ć", + "f", + ], + "Hindi": [ + "क", + "र", + "स", + "न", + "त", + "म", + "ह", + "प", + "य", + "ल", + "व", + "ज", + "द", + "ग", + "ब", + "श", + "ट", + "अ", + "ए", + "थ", + "भ", + "ड", + "च", + "ध", + "ष", + "इ", + ], + "Estonian": [ + "a", + "i", + "e", + "s", + "t", + "l", + "u", + "n", + "o", + "k", + "r", + "d", + "m", + "v", + "g", + "p", + "j", + "h", 
+ "ä", + "b", + "õ", + "ü", + "f", + "c", + "ö", + "y", + ], + "Thai": [ + "า", + "น", + "ร", + "อ", + "ก", + "เ", + "ง", + "ม", + "ย", + "ล", + "ว", + "ด", + "ท", + "ส", + "ต", + "ะ", + "ป", + "บ", + "ค", + "ห", + "แ", + "จ", + "พ", + "ช", + "ข", + "ใ", + ], + "Greek": [ + "α", + "τ", + "ο", + "ι", + "ε", + "ν", + "ρ", + "σ", + "κ", + "η", + "π", + "ς", + "υ", + "μ", + "λ", + "ί", + "ό", + "ά", + "γ", + "έ", + "δ", + "ή", + "ω", + "χ", + "θ", + "ύ", + ], + "Tamil": [ + "க", + "த", + "ப", + "ட", + "ர", + "ம", + "ல", + "ன", + "வ", + "ற", + "ய", + "ள", + "ச", + "ந", + "இ", + "ண", + "அ", + "ஆ", + "ழ", + "ங", + "எ", + "உ", + "ஒ", + "ஸ", + ], + "Kazakh": [ + "а", + "ы", + "е", + "н", + "т", + "р", + "л", + "і", + "д", + "с", + "м", + "қ", + "к", + "о", + "б", + "и", + "у", + "ғ", + "ж", + "ң", + "з", + "ш", + "й", + "п", + "г", + "ө", + ], +} + +LANGUAGE_SUPPORTED_COUNT: int = len(FREQUENCIES) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/legacy.py b/templates/skills/file_manager/dependencies/charset_normalizer/legacy.py new file mode 100644 index 00000000..43aad21a --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/legacy.py @@ -0,0 +1,54 @@ +from typing import Any, Dict, Optional, Union +from warnings import warn + +from .api import from_bytes +from .constant import CHARDET_CORRESPONDENCE + + +def detect( + byte_str: bytes, should_rename_legacy: bool = False, **kwargs: Any +) -> Dict[str, Optional[Union[str, float]]]: + """ + chardet legacy method + Detect the encoding of the given byte string. It should be mostly backward-compatible. + Encoding name will match Chardet own writing whenever possible. (Not on encoding name unsupported by it) + This function is deprecated and should be used to migrate your project easily, consult the documentation for + further information. Not planned for removal. + + :param byte_str: The byte sequence to examine. + :param should_rename_legacy: Should we rename legacy encodings + to their more modern equivalents? + """ + if len(kwargs): + warn( + f"charset-normalizer disregard arguments '{','.join(list(kwargs.keys()))}' in legacy function detect()" + ) + + if not isinstance(byte_str, (bytearray, bytes)): + raise TypeError( # pragma: nocover + "Expected object of type bytes or bytearray, got: " + "{0}".format(type(byte_str)) + ) + + if isinstance(byte_str, bytearray): + byte_str = bytes(byte_str) + + r = from_bytes(byte_str).best() + + encoding = r.encoding if r is not None else None + language = r.language if r is not None and r.language != "Unknown" else "" + confidence = 1.0 - r.chaos if r is not None else None + + # Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig get stripped in the detection/normalization process + # but chardet does return 'utf-8-sig' and it is a valid codec name. 
+    if r is not None and encoding == "utf_8" and r.bom:
+        encoding += "_sig"
+
+    if should_rename_legacy is False and encoding in CHARDET_CORRESPONDENCE:
+        encoding = CHARDET_CORRESPONDENCE[encoding]
+
+    return {
+        "encoding": encoding,
+        "language": language,
+        "confidence": confidence,
+    }
diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/md.cp311-win_amd64.pyd b/templates/skills/file_manager/dependencies/charset_normalizer/md.cp311-win_amd64.pyd
new file mode 100644
index 00000000..a23541bf
Binary files /dev/null and b/templates/skills/file_manager/dependencies/charset_normalizer/md.cp311-win_amd64.pyd differ
diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/md.py b/templates/skills/file_manager/dependencies/charset_normalizer/md.py
new file mode 100644
index 00000000..77897aae
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/charset_normalizer/md.py
@@ -0,0 +1,615 @@
+from functools import lru_cache
+from logging import getLogger
+from typing import List, Optional
+
+from .constant import (
+    COMMON_SAFE_ASCII_CHARACTERS,
+    TRACE,
+    UNICODE_SECONDARY_RANGE_KEYWORD,
+)
+from .utils import (
+    is_accentuated,
+    is_arabic,
+    is_arabic_isolated_form,
+    is_case_variable,
+    is_cjk,
+    is_emoticon,
+    is_hangul,
+    is_hiragana,
+    is_katakana,
+    is_latin,
+    is_punctuation,
+    is_separator,
+    is_symbol,
+    is_thai,
+    is_unprintable,
+    remove_accent,
+    unicode_range,
+)
+
+
+class MessDetectorPlugin:
+    """
+    Base abstract class used for mess detection plugins.
+    All detectors MUST extend and implement given methods.
+    """
+
+    def eligible(self, character: str) -> bool:
+        """
+        Determine if given character should be fed in.
+        """
+        raise NotImplementedError  # pragma: nocover
+
+    def feed(self, character: str) -> None:
+        """
+        The main routine to be executed upon character.
+        Insert the logic in which the text would be considered chaotic.
+        """
+        raise NotImplementedError  # pragma: nocover
+
+    def reset(self) -> None:  # pragma: no cover
+        """
+        Permit to reset the plugin to the initial state.
+        """
+        raise NotImplementedError
+
+    @property
+    def ratio(self) -> float:
+        """
+        Compute the chaos ratio based on what your feed() has seen.
+        Must NOT be lower than 0.0; there is no upper restriction.
+ """ + raise NotImplementedError # pragma: nocover + + +class TooManySymbolOrPunctuationPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._punctuation_count: int = 0 + self._symbol_count: int = 0 + self._character_count: int = 0 + + self._last_printable_char: Optional[str] = None + self._frenzy_symbol_in_word: bool = False + + def eligible(self, character: str) -> bool: + return character.isprintable() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if ( + character != self._last_printable_char + and character not in COMMON_SAFE_ASCII_CHARACTERS + ): + if is_punctuation(character): + self._punctuation_count += 1 + elif ( + character.isdigit() is False + and is_symbol(character) + and is_emoticon(character) is False + ): + self._symbol_count += 2 + + self._last_printable_char = character + + def reset(self) -> None: # pragma: no cover + self._punctuation_count = 0 + self._character_count = 0 + self._symbol_count = 0 + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + ratio_of_punctuation: float = ( + self._punctuation_count + self._symbol_count + ) / self._character_count + + return ratio_of_punctuation if ratio_of_punctuation >= 0.3 else 0.0 + + +class TooManyAccentuatedPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._character_count: int = 0 + self._accentuated_count: int = 0 + + def eligible(self, character: str) -> bool: + return character.isalpha() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if is_accentuated(character): + self._accentuated_count += 1 + + def reset(self) -> None: # pragma: no cover + self._character_count = 0 + self._accentuated_count = 0 + + @property + def ratio(self) -> float: + if self._character_count < 8: + return 0.0 + + ratio_of_accentuation: float = self._accentuated_count / self._character_count + return ratio_of_accentuation if ratio_of_accentuation >= 0.35 else 0.0 + + +class UnprintablePlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._unprintable_count: int = 0 + self._character_count: int = 0 + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + if is_unprintable(character): + self._unprintable_count += 1 + self._character_count += 1 + + def reset(self) -> None: # pragma: no cover + self._unprintable_count = 0 + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + return (self._unprintable_count * 8) / self._character_count + + +class SuspiciousDuplicateAccentPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._successive_count: int = 0 + self._character_count: int = 0 + + self._last_latin_character: Optional[str] = None + + def eligible(self, character: str) -> bool: + return character.isalpha() and is_latin(character) + + def feed(self, character: str) -> None: + self._character_count += 1 + if ( + self._last_latin_character is not None + and is_accentuated(character) + and is_accentuated(self._last_latin_character) + ): + if character.isupper() and self._last_latin_character.isupper(): + self._successive_count += 1 + # Worse if its the same char duplicated with different accent. 
+ if remove_accent(character) == remove_accent(self._last_latin_character): + self._successive_count += 1 + self._last_latin_character = character + + def reset(self) -> None: # pragma: no cover + self._successive_count = 0 + self._character_count = 0 + self._last_latin_character = None + + @property + def ratio(self) -> float: + if self._character_count == 0: + return 0.0 + + return (self._successive_count * 2) / self._character_count + + +class SuspiciousRange(MessDetectorPlugin): + def __init__(self) -> None: + self._suspicious_successive_range_count: int = 0 + self._character_count: int = 0 + self._last_printable_seen: Optional[str] = None + + def eligible(self, character: str) -> bool: + return character.isprintable() + + def feed(self, character: str) -> None: + self._character_count += 1 + + if ( + character.isspace() + or is_punctuation(character) + or character in COMMON_SAFE_ASCII_CHARACTERS + ): + self._last_printable_seen = None + return + + if self._last_printable_seen is None: + self._last_printable_seen = character + return + + unicode_range_a: Optional[str] = unicode_range(self._last_printable_seen) + unicode_range_b: Optional[str] = unicode_range(character) + + if is_suspiciously_successive_range(unicode_range_a, unicode_range_b): + self._suspicious_successive_range_count += 1 + + self._last_printable_seen = character + + def reset(self) -> None: # pragma: no cover + self._character_count = 0 + self._suspicious_successive_range_count = 0 + self._last_printable_seen = None + + @property + def ratio(self) -> float: + if self._character_count <= 24: + return 0.0 + + ratio_of_suspicious_range_usage: float = ( + self._suspicious_successive_range_count * 2 + ) / self._character_count + + return ratio_of_suspicious_range_usage + + +class SuperWeirdWordPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._word_count: int = 0 + self._bad_word_count: int = 0 + self._foreign_long_count: int = 0 + + self._is_current_word_bad: bool = False + self._foreign_long_watch: bool = False + + self._character_count: int = 0 + self._bad_character_count: int = 0 + + self._buffer: str = "" + self._buffer_accent_count: int = 0 + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + if character.isalpha(): + self._buffer += character + if is_accentuated(character): + self._buffer_accent_count += 1 + if ( + self._foreign_long_watch is False + and (is_latin(character) is False or is_accentuated(character)) + and is_cjk(character) is False + and is_hangul(character) is False + and is_katakana(character) is False + and is_hiragana(character) is False + and is_thai(character) is False + ): + self._foreign_long_watch = True + return + if not self._buffer: + return + if ( + character.isspace() or is_punctuation(character) or is_separator(character) + ) and self._buffer: + self._word_count += 1 + buffer_length: int = len(self._buffer) + + self._character_count += buffer_length + + if buffer_length >= 4: + if self._buffer_accent_count / buffer_length > 0.34: + self._is_current_word_bad = True + # Word/Buffer ending with an upper case accentuated letter are so rare, + # that we will consider them all as suspicious. Same weight as foreign_long suspicious. 
+ if ( + is_accentuated(self._buffer[-1]) + and self._buffer[-1].isupper() + and all(_.isupper() for _ in self._buffer) is False + ): + self._foreign_long_count += 1 + self._is_current_word_bad = True + if buffer_length >= 24 and self._foreign_long_watch: + camel_case_dst = [ + i + for c, i in zip(self._buffer, range(0, buffer_length)) + if c.isupper() + ] + probable_camel_cased: bool = False + + if camel_case_dst and (len(camel_case_dst) / buffer_length <= 0.3): + probable_camel_cased = True + + if not probable_camel_cased: + self._foreign_long_count += 1 + self._is_current_word_bad = True + + if self._is_current_word_bad: + self._bad_word_count += 1 + self._bad_character_count += len(self._buffer) + self._is_current_word_bad = False + + self._foreign_long_watch = False + self._buffer = "" + self._buffer_accent_count = 0 + elif ( + character not in {"<", ">", "-", "=", "~", "|", "_"} + and character.isdigit() is False + and is_symbol(character) + ): + self._is_current_word_bad = True + self._buffer += character + + def reset(self) -> None: # pragma: no cover + self._buffer = "" + self._is_current_word_bad = False + self._foreign_long_watch = False + self._bad_word_count = 0 + self._word_count = 0 + self._character_count = 0 + self._bad_character_count = 0 + self._foreign_long_count = 0 + + @property + def ratio(self) -> float: + if self._word_count <= 10 and self._foreign_long_count == 0: + return 0.0 + + return self._bad_character_count / self._character_count + + +class CjkInvalidStopPlugin(MessDetectorPlugin): + """ + GB(Chinese) based encoding often render the stop incorrectly when the content does not fit and + can be easily detected. Searching for the overuse of '丅' and '丄'. + """ + + def __init__(self) -> None: + self._wrong_stop_count: int = 0 + self._cjk_character_count: int = 0 + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + if character in {"丅", "丄"}: + self._wrong_stop_count += 1 + return + if is_cjk(character): + self._cjk_character_count += 1 + + def reset(self) -> None: # pragma: no cover + self._wrong_stop_count = 0 + self._cjk_character_count = 0 + + @property + def ratio(self) -> float: + if self._cjk_character_count < 16: + return 0.0 + return self._wrong_stop_count / self._cjk_character_count + + +class ArchaicUpperLowerPlugin(MessDetectorPlugin): + def __init__(self) -> None: + self._buf: bool = False + + self._character_count_since_last_sep: int = 0 + + self._successive_upper_lower_count: int = 0 + self._successive_upper_lower_count_final: int = 0 + + self._character_count: int = 0 + + self._last_alpha_seen: Optional[str] = None + self._current_ascii_only: bool = True + + def eligible(self, character: str) -> bool: + return True + + def feed(self, character: str) -> None: + is_concerned = character.isalpha() and is_case_variable(character) + chunk_sep = is_concerned is False + + if chunk_sep and self._character_count_since_last_sep > 0: + if ( + self._character_count_since_last_sep <= 64 + and character.isdigit() is False + and self._current_ascii_only is False + ): + self._successive_upper_lower_count_final += ( + self._successive_upper_lower_count + ) + + self._successive_upper_lower_count = 0 + self._character_count_since_last_sep = 0 + self._last_alpha_seen = None + self._buf = False + self._character_count += 1 + self._current_ascii_only = True + + return + + if self._current_ascii_only is True and character.isascii() is False: + self._current_ascii_only = False + + if self._last_alpha_seen is not None: 
+            if (character.isupper() and self._last_alpha_seen.islower()) or (
+                character.islower() and self._last_alpha_seen.isupper()
+            ):
+                if self._buf is True:
+                    self._successive_upper_lower_count += 2
+                    self._buf = False
+                else:
+                    self._buf = True
+            else:
+                self._buf = False
+
+        self._character_count += 1
+        self._character_count_since_last_sep += 1
+        self._last_alpha_seen = character
+
+    def reset(self) -> None:  # pragma: no cover
+        self._character_count = 0
+        self._character_count_since_last_sep = 0
+        self._successive_upper_lower_count = 0
+        self._successive_upper_lower_count_final = 0
+        self._last_alpha_seen = None
+        self._buf = False
+        self._current_ascii_only = True
+
+    @property
+    def ratio(self) -> float:
+        if self._character_count == 0:
+            return 0.0
+
+        return self._successive_upper_lower_count_final / self._character_count
+
+
+class ArabicIsolatedFormPlugin(MessDetectorPlugin):
+    def __init__(self) -> None:
+        self._character_count: int = 0
+        self._isolated_form_count: int = 0
+
+    def reset(self) -> None:  # pragma: no cover
+        self._character_count = 0
+        self._isolated_form_count = 0
+
+    def eligible(self, character: str) -> bool:
+        return is_arabic(character)
+
+    def feed(self, character: str) -> None:
+        self._character_count += 1
+
+        if is_arabic_isolated_form(character):
+            self._isolated_form_count += 1
+
+    @property
+    def ratio(self) -> float:
+        if self._character_count < 8:
+            return 0.0
+
+        isolated_form_usage: float = self._isolated_form_count / self._character_count
+
+        return isolated_form_usage
+
+
+@lru_cache(maxsize=1024)
+def is_suspiciously_successive_range(
+    unicode_range_a: Optional[str], unicode_range_b: Optional[str]
+) -> bool:
+    """
+    Determine if two Unicode ranges seen next to each other can be considered as suspicious.
+    """
+    if unicode_range_a is None or unicode_range_b is None:
+        return True
+
+    if unicode_range_a == unicode_range_b:
+        return False
+
+    if "Latin" in unicode_range_a and "Latin" in unicode_range_b:
+        return False
+
+    if "Emoticons" in unicode_range_a or "Emoticons" in unicode_range_b:
+        return False
+
+    # Latin characters can be accompanied with a combining diacritical mark
+    # eg. Vietnamese.
+    if ("Latin" in unicode_range_a or "Latin" in unicode_range_b) and (
+        "Combining" in unicode_range_a or "Combining" in unicode_range_b
+    ):
+        return False
+
+    keywords_range_a, keywords_range_b = unicode_range_a.split(
+        " "
+    ), unicode_range_b.split(" ")
+
+    for el in keywords_range_a:
+        if el in UNICODE_SECONDARY_RANGE_KEYWORD:
+            continue
+        if el in keywords_range_b:
+            return False
+
+    # Japanese Exception
+    range_a_jp_chars, range_b_jp_chars = (
+        unicode_range_a
+        in (
+            "Hiragana",
+            "Katakana",
+        ),
+        unicode_range_b in ("Hiragana", "Katakana"),
+    )
+    if (range_a_jp_chars or range_b_jp_chars) and (
+        "CJK" in unicode_range_a or "CJK" in unicode_range_b
+    ):
+        return False
+    if range_a_jp_chars and range_b_jp_chars:
+        return False
+
+    if "Hangul" in unicode_range_a or "Hangul" in unicode_range_b:
+        if "CJK" in unicode_range_a or "CJK" in unicode_range_b:
+            return False
+        if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin":
+            return False
+
+    # Chinese/Japanese use dedicated range for punctuation and/or separators.
+ if ("CJK" in unicode_range_a or "CJK" in unicode_range_b) or ( + unicode_range_a in ["Katakana", "Hiragana"] + and unicode_range_b in ["Katakana", "Hiragana"] + ): + if "Punctuation" in unicode_range_a or "Punctuation" in unicode_range_b: + return False + if "Forms" in unicode_range_a or "Forms" in unicode_range_b: + return False + if unicode_range_a == "Basic Latin" or unicode_range_b == "Basic Latin": + return False + + return True + + +@lru_cache(maxsize=2048) +def mess_ratio( + decoded_sequence: str, maximum_threshold: float = 0.2, debug: bool = False +) -> float: + """ + Compute a mess ratio given a decoded bytes sequence. The maximum threshold does stop the computation earlier. + """ + + detectors: List[MessDetectorPlugin] = [ + md_class() for md_class in MessDetectorPlugin.__subclasses__() + ] + + length: int = len(decoded_sequence) + 1 + + mean_mess_ratio: float = 0.0 + + if length < 512: + intermediary_mean_mess_ratio_calc: int = 32 + elif length <= 1024: + intermediary_mean_mess_ratio_calc = 64 + else: + intermediary_mean_mess_ratio_calc = 128 + + for character, index in zip(decoded_sequence + "\n", range(length)): + for detector in detectors: + if detector.eligible(character): + detector.feed(character) + + if ( + index > 0 and index % intermediary_mean_mess_ratio_calc == 0 + ) or index == length - 1: + mean_mess_ratio = sum(dt.ratio for dt in detectors) + + if mean_mess_ratio >= maximum_threshold: + break + + if debug: + logger = getLogger("charset_normalizer") + + logger.log( + TRACE, + "Mess-detector extended-analysis start. " + f"intermediary_mean_mess_ratio_calc={intermediary_mean_mess_ratio_calc} mean_mess_ratio={mean_mess_ratio} " + f"maximum_threshold={maximum_threshold}", + ) + + if len(decoded_sequence) > 16: + logger.log(TRACE, f"Starting with: {decoded_sequence[:16]}") + logger.log(TRACE, f"Ending with: {decoded_sequence[-16::]}") + + for dt in detectors: # pragma: nocover + logger.log(TRACE, f"{dt.__class__}: {dt.ratio}") + + return round(mean_mess_ratio, 3) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/md__mypyc.cp311-win_amd64.pyd b/templates/skills/file_manager/dependencies/charset_normalizer/md__mypyc.cp311-win_amd64.pyd new file mode 100644 index 00000000..22db98cc Binary files /dev/null and b/templates/skills/file_manager/dependencies/charset_normalizer/md__mypyc.cp311-win_amd64.pyd differ diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/models.py b/templates/skills/file_manager/dependencies/charset_normalizer/models.py new file mode 100644 index 00000000..a760b9c5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/models.py @@ -0,0 +1,340 @@ +from encodings.aliases import aliases +from hashlib import sha256 +from json import dumps +from typing import Any, Dict, Iterator, List, Optional, Tuple, Union + +from .constant import TOO_BIG_SEQUENCE +from .utils import iana_name, is_multi_byte_encoding, unicode_range + + +class CharsetMatch: + def __init__( + self, + payload: bytes, + guessed_encoding: str, + mean_mess_ratio: float, + has_sig_or_bom: bool, + languages: "CoherenceMatches", + decoded_payload: Optional[str] = None, + ): + self._payload: bytes = payload + + self._encoding: str = guessed_encoding + self._mean_mess_ratio: float = mean_mess_ratio + self._languages: CoherenceMatches = languages + self._has_sig_or_bom: bool = has_sig_or_bom + self._unicode_ranges: Optional[List[str]] = None + + self._leaves: List[CharsetMatch] = [] + self._mean_coherence_ratio: float = 0.0 
+
+        self._output_payload: Optional[bytes] = None
+        self._output_encoding: Optional[str] = None
+
+        self._string: Optional[str] = decoded_payload
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CharsetMatch):
+            raise TypeError(
+                "__eq__ cannot be invoked on {} and {}.".format(
+                    str(other.__class__), str(self.__class__)
+                )
+            )
+        return self.encoding == other.encoding and self.fingerprint == other.fingerprint
+
+    def __lt__(self, other: object) -> bool:
+        """
+        Implemented to make sorted available upon CharsetMatches items.
+        """
+        if not isinstance(other, CharsetMatch):
+            raise ValueError
+
+        chaos_difference: float = abs(self.chaos - other.chaos)
+        coherence_difference: float = abs(self.coherence - other.coherence)
+
+        # Below 1% difference --> Use Coherence
+        if chaos_difference < 0.01 and coherence_difference > 0.02:
+            return self.coherence > other.coherence
+        elif chaos_difference < 0.01 and coherence_difference <= 0.02:
+            # When having a difficult decision, use the result that decoded as many multi-byte as possible.
+            # preserve RAM usage!
+            if len(self._payload) >= TOO_BIG_SEQUENCE:
+                return self.chaos < other.chaos
+            return self.multi_byte_usage > other.multi_byte_usage
+
+        return self.chaos < other.chaos
+
+    @property
+    def multi_byte_usage(self) -> float:
+        return 1.0 - (len(str(self)) / len(self.raw))
+
+    def __str__(self) -> str:
+        # Lazy Str Loading
+        if self._string is None:
+            self._string = str(self._payload, self._encoding, "strict")
+        return self._string
+
+    def __repr__(self) -> str:
+        return "<CharsetMatch '{}' bytes({})>".format(self.encoding, self.fingerprint)
+
+    def add_submatch(self, other: "CharsetMatch") -> None:
+        if not isinstance(other, CharsetMatch) or other == self:
+            raise ValueError(
+                "Unable to add instance <{}> as a submatch of a CharsetMatch".format(
+                    other.__class__
+                )
+            )
+
+        other._string = None  # Unload RAM usage; dirty trick.
+        self._leaves.append(other)
+
+    @property
+    def encoding(self) -> str:
+        return self._encoding
+
+    @property
+    def encoding_aliases(self) -> List[str]:
+        """
+        An encoding can be known by many names; using this could help when searching for IBM855 when it's listed as CP855.
+        """
+        also_known_as: List[str] = []
+        for u, p in aliases.items():
+            if self.encoding == u:
+                also_known_as.append(p)
+            elif self.encoding == p:
+                also_known_as.append(u)
+        return also_known_as
+
+    @property
+    def bom(self) -> bool:
+        return self._has_sig_or_bom
+
+    @property
+    def byte_order_mark(self) -> bool:
+        return self._has_sig_or_bom
+
+    @property
+    def languages(self) -> List[str]:
+        """
+        Return the complete list of possible languages found in decoded sequence.
+        Usually not really useful. The returned list may be empty even if the 'language' property returns something != 'Unknown'.
+        """
+        return [e[0] for e in self._languages]
+
+    @property
+    def language(self) -> str:
+        """
+        Most probable language found in decoded sequence. If none were detected or inferred, the property will return
+        "Unknown".
+        """
+        if not self._languages:
+            # Trying to infer the language based on the given encoding
+            # It's either English, or we should not pronounce ourselves in certain cases.
+ if "ascii" in self.could_be_from_charset: + return "English" + + # doing it there to avoid circular import + from charset_normalizer.cd import encoding_languages, mb_encoding_languages + + languages = ( + mb_encoding_languages(self.encoding) + if is_multi_byte_encoding(self.encoding) + else encoding_languages(self.encoding) + ) + + if len(languages) == 0 or "Latin Based" in languages: + return "Unknown" + + return languages[0] + + return self._languages[0][0] + + @property + def chaos(self) -> float: + return self._mean_mess_ratio + + @property + def coherence(self) -> float: + if not self._languages: + return 0.0 + return self._languages[0][1] + + @property + def percent_chaos(self) -> float: + return round(self.chaos * 100, ndigits=3) + + @property + def percent_coherence(self) -> float: + return round(self.coherence * 100, ndigits=3) + + @property + def raw(self) -> bytes: + """ + Original untouched bytes. + """ + return self._payload + + @property + def submatch(self) -> List["CharsetMatch"]: + return self._leaves + + @property + def has_submatch(self) -> bool: + return len(self._leaves) > 0 + + @property + def alphabets(self) -> List[str]: + if self._unicode_ranges is not None: + return self._unicode_ranges + # list detected ranges + detected_ranges: List[Optional[str]] = [ + unicode_range(char) for char in str(self) + ] + # filter and sort + self._unicode_ranges = sorted(list({r for r in detected_ranges if r})) + return self._unicode_ranges + + @property + def could_be_from_charset(self) -> List[str]: + """ + The complete list of encoding that output the exact SAME str result and therefore could be the originating + encoding. + This list does include the encoding available in property 'encoding'. + """ + return [self._encoding] + [m.encoding for m in self._leaves] + + def output(self, encoding: str = "utf_8") -> bytes: + """ + Method to get re-encoded bytes payload using given target encoding. Default to UTF-8. + Any errors will be simply ignored by the encoder NOT replaced. + """ + if self._output_encoding is None or self._output_encoding != encoding: + self._output_encoding = encoding + self._output_payload = str(self).encode(encoding, "replace") + + return self._output_payload # type: ignore + + @property + def fingerprint(self) -> str: + """ + Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one. + """ + return sha256(self.output()).hexdigest() + + +class CharsetMatches: + """ + Container with every CharsetMatch items ordered by default from most probable to the less one. + Act like a list(iterable) but does not implements all related methods. + """ + + def __init__(self, results: Optional[List[CharsetMatch]] = None): + self._results: List[CharsetMatch] = sorted(results) if results else [] + + def __iter__(self) -> Iterator[CharsetMatch]: + yield from self._results + + def __getitem__(self, item: Union[int, str]) -> CharsetMatch: + """ + Retrieve a single item either by its position or encoding name (alias may be used here). + Raise KeyError upon invalid index or encoding not present in results. + """ + if isinstance(item, int): + return self._results[item] + if isinstance(item, str): + item = iana_name(item, False) + for result in self._results: + if item in result.could_be_from_charset: + return result + raise KeyError + + def __len__(self) -> int: + return len(self._results) + + def __bool__(self) -> bool: + return len(self._results) > 0 + + def append(self, item: CharsetMatch) -> None: + """ + Insert a single match. 
Will be inserted accordingly to preserve sort. + Can be inserted as a submatch. + """ + if not isinstance(item, CharsetMatch): + raise ValueError( + "Cannot append instance '{}' to CharsetMatches".format( + str(item.__class__) + ) + ) + # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage) + if len(item.raw) <= TOO_BIG_SEQUENCE: + for match in self._results: + if match.fingerprint == item.fingerprint and match.chaos == item.chaos: + match.add_submatch(item) + return + self._results.append(item) + self._results = sorted(self._results) + + def best(self) -> Optional["CharsetMatch"]: + """ + Simply return the first match. Strict equivalent to matches[0]. + """ + if not self._results: + return None + return self._results[0] + + def first(self) -> Optional["CharsetMatch"]: + """ + Redundant method, call the method best(). Kept for BC reasons. + """ + return self.best() + + +CoherenceMatch = Tuple[str, float] +CoherenceMatches = List[CoherenceMatch] + + +class CliDetectionResult: + def __init__( + self, + path: str, + encoding: Optional[str], + encoding_aliases: List[str], + alternative_encodings: List[str], + language: str, + alphabets: List[str], + has_sig_or_bom: bool, + chaos: float, + coherence: float, + unicode_path: Optional[str], + is_preferred: bool, + ): + self.path: str = path + self.unicode_path: Optional[str] = unicode_path + self.encoding: Optional[str] = encoding + self.encoding_aliases: List[str] = encoding_aliases + self.alternative_encodings: List[str] = alternative_encodings + self.language: str = language + self.alphabets: List[str] = alphabets + self.has_sig_or_bom: bool = has_sig_or_bom + self.chaos: float = chaos + self.coherence: float = coherence + self.is_preferred: bool = is_preferred + + @property + def __dict__(self) -> Dict[str, Any]: # type: ignore + return { + "path": self.path, + "encoding": self.encoding, + "encoding_aliases": self.encoding_aliases, + "alternative_encodings": self.alternative_encodings, + "language": self.language, + "alphabets": self.alphabets, + "has_sig_or_bom": self.has_sig_or_bom, + "chaos": self.chaos, + "coherence": self.coherence, + "unicode_path": self.unicode_path, + "is_preferred": self.is_preferred, + } + + def to_json(self) -> str: + return dumps(self.__dict__, ensure_ascii=True, indent=4) diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/py.typed b/templates/skills/file_manager/dependencies/charset_normalizer/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/utils.py b/templates/skills/file_manager/dependencies/charset_normalizer/utils.py new file mode 100644 index 00000000..e5cbbf4c --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/utils.py @@ -0,0 +1,421 @@ +import importlib +import logging +import unicodedata +from codecs import IncrementalDecoder +from encodings.aliases import aliases +from functools import lru_cache +from re import findall +from typing import Generator, List, Optional, Set, Tuple, Union + +from _multibytecodec import MultibyteIncrementalDecoder + +from .constant import ( + ENCODING_MARKS, + IANA_SUPPORTED_SIMILAR, + RE_POSSIBLE_ENCODING_INDICATION, + UNICODE_RANGES_COMBINED, + UNICODE_SECONDARY_RANGE_KEYWORD, + UTF8_MAXIMAL_ALLOCATION, +) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_accentuated(character: str) -> bool: + try: + description: str = unicodedata.name(character) + except ValueError: + return False + return ( + 
"WITH GRAVE" in description + or "WITH ACUTE" in description + or "WITH CEDILLA" in description + or "WITH DIAERESIS" in description + or "WITH CIRCUMFLEX" in description + or "WITH TILDE" in description + or "WITH MACRON" in description + or "WITH RING ABOVE" in description + ) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def remove_accent(character: str) -> str: + decomposed: str = unicodedata.decomposition(character) + if not decomposed: + return character + + codes: List[str] = decomposed.split(" ") + + return chr(int(codes[0], 16)) + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def unicode_range(character: str) -> Optional[str]: + """ + Retrieve the Unicode range official name from a single character. + """ + character_ord: int = ord(character) + + for range_name, ord_range in UNICODE_RANGES_COMBINED.items(): + if character_ord in ord_range: + return range_name + + return None + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_latin(character: str) -> bool: + try: + description: str = unicodedata.name(character) + except ValueError: + return False + return "LATIN" in description + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_punctuation(character: str) -> bool: + character_category: str = unicodedata.category(character) + + if "P" in character_category: + return True + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Punctuation" in character_range + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_symbol(character: str) -> bool: + character_category: str = unicodedata.category(character) + + if "S" in character_category or "N" in character_category: + return True + + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Forms" in character_range and character_category != "Lo" + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_emoticon(character: str) -> bool: + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + return False + + return "Emoticons" in character_range or "Pictographs" in character_range + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_separator(character: str) -> bool: + if character.isspace() or character in {"|", "+", "<", ">"}: + return True + + character_category: str = unicodedata.category(character) + + return "Z" in character_category or character_category in {"Po", "Pd", "Pc"} + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_case_variable(character: str) -> bool: + return character.islower() != character.isupper() + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_cjk(character: str) -> bool: + try: + character_name = unicodedata.name(character) + except ValueError: + return False + + return "CJK" in character_name + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_hiragana(character: str) -> bool: + try: + character_name = unicodedata.name(character) + except ValueError: + return False + + return "HIRAGANA" in character_name + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_katakana(character: str) -> bool: + try: + character_name = unicodedata.name(character) + except ValueError: + return False + + return "KATAKANA" in character_name + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_hangul(character: str) -> bool: + try: + character_name = unicodedata.name(character) + except ValueError: + return False + + return "HANGUL" in character_name + + +@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION) +def is_thai(character: 
str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "THAI" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_arabic(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "ARABIC" in character_name
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_arabic_isolated_form(character: str) -> bool:
+    try:
+        character_name = unicodedata.name(character)
+    except ValueError:
+        return False
+
+    return "ARABIC" in character_name and "ISOLATED FORM" in character_name
+
+
+@lru_cache(maxsize=len(UNICODE_RANGES_COMBINED))
+def is_unicode_range_secondary(range_name: str) -> bool:
+    return any(keyword in range_name for keyword in UNICODE_SECONDARY_RANGE_KEYWORD)
+
+
+@lru_cache(maxsize=UTF8_MAXIMAL_ALLOCATION)
+def is_unprintable(character: str) -> bool:
+    return (
+        character.isspace() is False  # includes \n \t \r \v
+        and character.isprintable() is False
+        and character != "\x1A"  # Why? It's the ASCII substitute character.
+        and character != "\ufeff"  # bug discovered in Python,
+        # Zero Width No-Break Space located in Arabic Presentation Forms-B, Unicode 1.1 not acknowledged as space.
+    )
+
+
+def any_specified_encoding(sequence: bytes, search_zone: int = 8192) -> Optional[str]:
+    """
+    Extract using ASCII-only decoder any specified encoding in the first n-bytes.
+    """
+    if not isinstance(sequence, bytes):
+        raise TypeError
+
+    seq_len: int = len(sequence)
+
+    results: List[str] = findall(
+        RE_POSSIBLE_ENCODING_INDICATION,
+        sequence[: min(seq_len, search_zone)].decode("ascii", errors="ignore"),
+    )
+
+    if len(results) == 0:
+        return None
+
+    for specified_encoding in results:
+        specified_encoding = specified_encoding.lower().replace("-", "_")
+
+        encoding_alias: str
+        encoding_iana: str
+
+        for encoding_alias, encoding_iana in aliases.items():
+            if encoding_alias == specified_encoding:
+                return encoding_iana
+            if encoding_iana == specified_encoding:
+                return encoding_iana
+
+    return None
+
+
+@lru_cache(maxsize=128)
+def is_multi_byte_encoding(name: str) -> bool:
+    """
+    Verify whether a specific encoding is a multi-byte one based on its IANA name.
+    """
+    return name in {
+        "utf_8",
+        "utf_8_sig",
+        "utf_16",
+        "utf_16_be",
+        "utf_16_le",
+        "utf_32",
+        "utf_32_le",
+        "utf_32_be",
+        "utf_7",
+    } or issubclass(
+        importlib.import_module("encodings.{}".format(name)).IncrementalDecoder,
+        MultibyteIncrementalDecoder,
+    )
+
+
+def identify_sig_or_bom(sequence: bytes) -> Tuple[Optional[str], bytes]:
+    """
+    Identify and extract SIG/BOM in given sequence.
+ """ + + for iana_encoding in ENCODING_MARKS: + marks: Union[bytes, List[bytes]] = ENCODING_MARKS[iana_encoding] + + if isinstance(marks, bytes): + marks = [marks] + + for mark in marks: + if sequence.startswith(mark): + return iana_encoding, mark + + return None, b"" + + +def should_strip_sig_or_bom(iana_encoding: str) -> bool: + return iana_encoding not in {"utf_16", "utf_32"} + + +def iana_name(cp_name: str, strict: bool = True) -> str: + cp_name = cp_name.lower().replace("-", "_") + + encoding_alias: str + encoding_iana: str + + for encoding_alias, encoding_iana in aliases.items(): + if cp_name in [encoding_alias, encoding_iana]: + return encoding_iana + + if strict: + raise ValueError("Unable to retrieve IANA for '{}'".format(cp_name)) + + return cp_name + + +def range_scan(decoded_sequence: str) -> List[str]: + ranges: Set[str] = set() + + for character in decoded_sequence: + character_range: Optional[str] = unicode_range(character) + + if character_range is None: + continue + + ranges.add(character_range) + + return list(ranges) + + +def cp_similarity(iana_name_a: str, iana_name_b: str) -> float: + if is_multi_byte_encoding(iana_name_a) or is_multi_byte_encoding(iana_name_b): + return 0.0 + + decoder_a = importlib.import_module( + "encodings.{}".format(iana_name_a) + ).IncrementalDecoder + decoder_b = importlib.import_module( + "encodings.{}".format(iana_name_b) + ).IncrementalDecoder + + id_a: IncrementalDecoder = decoder_a(errors="ignore") + id_b: IncrementalDecoder = decoder_b(errors="ignore") + + character_match_count: int = 0 + + for i in range(255): + to_be_decoded: bytes = bytes([i]) + if id_a.decode(to_be_decoded) == id_b.decode(to_be_decoded): + character_match_count += 1 + + return character_match_count / 254 + + +def is_cp_similar(iana_name_a: str, iana_name_b: str) -> bool: + """ + Determine if two code page are at least 80% similar. IANA_SUPPORTED_SIMILAR dict was generated using + the function cp_similarity. + """ + return ( + iana_name_a in IANA_SUPPORTED_SIMILAR + and iana_name_b in IANA_SUPPORTED_SIMILAR[iana_name_a] + ) + + +def set_logging_handler( + name: str = "charset_normalizer", + level: int = logging.INFO, + format_string: str = "%(asctime)s | %(levelname)s | %(message)s", +) -> None: + logger = logging.getLogger(name) + logger.setLevel(level) + + handler = logging.StreamHandler() + handler.setFormatter(logging.Formatter(format_string)) + logger.addHandler(handler) + + +def cut_sequence_chunks( + sequences: bytes, + encoding_iana: str, + offsets: range, + chunk_size: int, + bom_or_sig_available: bool, + strip_sig_or_bom: bool, + sig_payload: bytes, + is_multi_byte_decoder: bool, + decoded_payload: Optional[str] = None, +) -> Generator[str, None, None]: + if decoded_payload and is_multi_byte_decoder is False: + for i in offsets: + chunk = decoded_payload[i : i + chunk_size] + if not chunk: + break + yield chunk + else: + for i in offsets: + chunk_end = i + chunk_size + if chunk_end > len(sequences) + 8: + continue + + cut_sequence = sequences[i : i + chunk_size] + + if bom_or_sig_available and strip_sig_or_bom is False: + cut_sequence = sig_payload + cut_sequence + + chunk = cut_sequence.decode( + encoding_iana, + errors="ignore" if is_multi_byte_decoder else "strict", + ) + + # multi-byte bad cutting detector and adjustment + # not the cleanest way to perform that fix but clever enough for now. 
+ if is_multi_byte_decoder and i > 0: + chunk_partial_size_chk: int = min(chunk_size, 16) + + if ( + decoded_payload + and chunk[:chunk_partial_size_chk] not in decoded_payload + ): + for j in range(i, i - 4, -1): + cut_sequence = sequences[j:chunk_end] + + if bom_or_sig_available and strip_sig_or_bom is False: + cut_sequence = sig_payload + cut_sequence + + chunk = cut_sequence.decode(encoding_iana, errors="ignore") + + if chunk[:chunk_partial_size_chk] in decoded_payload: + break + + yield chunk diff --git a/templates/skills/file_manager/dependencies/charset_normalizer/version.py b/templates/skills/file_manager/dependencies/charset_normalizer/version.py new file mode 100644 index 00000000..5a4da4ff --- /dev/null +++ b/templates/skills/file_manager/dependencies/charset_normalizer/version.py @@ -0,0 +1,6 @@ +""" +Expose version +""" + +__version__ = "3.3.2" +VERSION = __version__.split(".") diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/INSTALLER b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/METADATA b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/METADATA new file mode 100644 index 00000000..1173bdea --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/METADATA @@ -0,0 +1,138 @@ +Metadata-Version: 2.3 +Name: cryptography +Version: 43.0.1 +Classifier: Development Status :: 5 - Production/Stable +Classifier: Intended Audience :: Developers +Classifier: License :: OSI Approved :: Apache Software License +Classifier: License :: OSI Approved :: BSD License +Classifier: Natural Language :: English +Classifier: Operating System :: MacOS :: MacOS X +Classifier: Operating System :: POSIX +Classifier: Operating System :: POSIX :: BSD +Classifier: Operating System :: POSIX :: Linux +Classifier: Operating System :: Microsoft :: Windows +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Programming Language :: Python :: 3.7 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: Implementation :: CPython +Classifier: Programming Language :: Python :: Implementation :: PyPy +Classifier: Topic :: Security :: Cryptography +Requires-Dist: cffi >=1.12 ; platform_python_implementation != 'PyPy' +Requires-Dist: bcrypt >=3.1.5 ; extra == 'ssh' +Requires-Dist: nox ; extra == 'nox' +Requires-Dist: cryptography-vectors ==43.0.1 ; extra == 'test' +Requires-Dist: pytest >=6.2.0 ; extra == 'test' +Requires-Dist: pytest-benchmark ; extra == 'test' +Requires-Dist: pytest-cov ; extra == 'test' +Requires-Dist: pytest-xdist ; extra == 'test' +Requires-Dist: pretend ; extra == 'test' +Requires-Dist: certifi ; extra == 'test' +Requires-Dist: pytest-randomly ; extra == 'test-randomorder' +Requires-Dist: sphinx >=5.3.0 ; extra == 'docs' +Requires-Dist: sphinx-rtd-theme >=1.1.1 ; extra == 'docs' +Requires-Dist: pyenchant >=1.6.11 ; extra == 'docstest' +Requires-Dist: 
readme-renderer ; extra == 'docstest' +Requires-Dist: sphinxcontrib-spelling >=4.0.1 ; extra == 'docstest' +Requires-Dist: build ; extra == 'sdist' +Requires-Dist: ruff ; extra == 'pep8test' +Requires-Dist: mypy ; extra == 'pep8test' +Requires-Dist: check-sdist ; extra == 'pep8test' +Requires-Dist: click ; extra == 'pep8test' +Provides-Extra: ssh +Provides-Extra: nox +Provides-Extra: test +Provides-Extra: test-randomorder +Provides-Extra: docs +Provides-Extra: docstest +Provides-Extra: sdist +Provides-Extra: pep8test +License-File: LICENSE +License-File: LICENSE.APACHE +License-File: LICENSE.BSD +Summary: cryptography is a package which provides cryptographic recipes and primitives to Python developers. +Author: The cryptography developers +Author-email: The Python Cryptographic Authority and individual contributors +License: Apache-2.0 OR BSD-3-Clause +Requires-Python: >=3.7 +Description-Content-Type: text/x-rst; charset=UTF-8 +Project-URL: homepage, https://github.com/pyca/cryptography +Project-URL: documentation, https://cryptography.io/ +Project-URL: source, https://github.com/pyca/cryptography/ +Project-URL: issues, https://github.com/pyca/cryptography/issues +Project-URL: changelog, https://cryptography.io/en/latest/changelog/ + +pyca/cryptography +================= + +.. image:: https://img.shields.io/pypi/v/cryptography.svg + :target: https://pypi.org/project/cryptography/ + :alt: Latest Version + +.. image:: https://readthedocs.org/projects/cryptography/badge/?version=latest + :target: https://cryptography.io + :alt: Latest Docs + +.. image:: https://github.com/pyca/cryptography/workflows/CI/badge.svg?branch=main + :target: https://github.com/pyca/cryptography/actions?query=workflow%3ACI+branch%3Amain + + +``cryptography`` is a package which provides cryptographic recipes and +primitives to Python developers. Our goal is for it to be your "cryptographic +standard library". It supports Python 3.7+ and PyPy3 7.3.11+. + +``cryptography`` includes both high level recipes and low level interfaces to +common cryptographic algorithms such as symmetric ciphers, message digests, and +key derivation functions. For example, to encrypt something with +``cryptography``'s high level symmetric encryption recipe: + +.. code-block:: pycon + + >>> from cryptography.fernet import Fernet + >>> # Put this somewhere safe! + >>> key = Fernet.generate_key() + >>> f = Fernet(key) + >>> token = f.encrypt(b"A really secret message. Not for prying eyes.") + >>> token + b'...' + >>> f.decrypt(token) + b'A really secret message. Not for prying eyes.' + +You can find more information in the `documentation`_. + +You can install ``cryptography`` with: + +.. code-block:: console + + $ pip install cryptography + +For full details see `the installation documentation`_. + +Discussion +~~~~~~~~~~ + +If you run into bugs, you can file them in our `issue tracker`_. + +We maintain a `cryptography-dev`_ mailing list for development discussion. + +You can also join ``#pyca`` on ``irc.libera.chat`` to ask questions or get +involved. + +Security +~~~~~~~~ + +Need to report a security issue? Please consult our `security reporting`_ +documentation. + + +.. _`documentation`: https://cryptography.io/ +.. _`the installation documentation`: https://cryptography.io/en/latest/installation/ +.. _`issue tracker`: https://github.com/pyca/cryptography/issues +.. _`cryptography-dev`: https://mail.python.org/mailman/listinfo/cryptography-dev +.. 
_`security reporting`: https://cryptography.io/en/latest/security/ + diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/RECORD b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/RECORD new file mode 100644 index 00000000..cecc8809 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/RECORD @@ -0,0 +1,174 @@ +cryptography-43.0.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +cryptography-43.0.1.dist-info/METADATA,sha256=AGSptf0qwYYF5RLvcScxitnPJZ6URUiMFp8jelkGAuE,5440 +cryptography-43.0.1.dist-info/RECORD,, +cryptography-43.0.1.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cryptography-43.0.1.dist-info/WHEEL,sha256=8_4EnrLvbhzH224YH8WypoB7HFn-vpbwr_zHlr3XUBI,94 +cryptography-43.0.1.dist-info/license_files/LICENSE,sha256=Pgx8CRqUi4JTO6mP18u0BDLW8amsv4X1ki0vmak65rs,197 +cryptography-43.0.1.dist-info/license_files/LICENSE.APACHE,sha256=qsc7MUj20dcRHbyjIJn2jSbGRMaBOuHk8F9leaomY_4,11360 +cryptography-43.0.1.dist-info/license_files/LICENSE.BSD,sha256=YCxMdILeZHndLpeTzaJ15eY9dz2s0eymiSMqtwCPtPs,1532 +cryptography/__about__.py,sha256=pY_pmYXjJTK-LjfCu7ot0NMj0QC2dkD1dCPyV8QjISM,445 +cryptography/__init__.py,sha256=mthuUrTd4FROCpUYrTIqhjz6s6T9djAZrV7nZ1oMm2o,364 +cryptography/__pycache__/__about__.cpython-311.pyc,, +cryptography/__pycache__/__init__.cpython-311.pyc,, +cryptography/__pycache__/exceptions.cpython-311.pyc,, +cryptography/__pycache__/fernet.cpython-311.pyc,, +cryptography/__pycache__/utils.cpython-311.pyc,, +cryptography/exceptions.py,sha256=835EWILc2fwxw-gyFMriciC2SqhViETB10LBSytnDIc,1087 +cryptography/fernet.py,sha256=aPj82w-Z_1GBXUtWRUsZdVbMwRo5Mbjj0wkA9wG4rkw,6696 +cryptography/hazmat/__init__.py,sha256=5IwrLWrVp0AjEr_4FdWG_V057NSJGY_W4egNNsuct0g,455 +cryptography/hazmat/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/__pycache__/_oid.cpython-311.pyc,, +cryptography/hazmat/_oid.py,sha256=e9yLmxtdQtuL94ztQv3SGtt_ea1Mx6aUwGftJsP6EXk,15201 +cryptography/hazmat/backends/__init__.py,sha256=O5jvKFQdZnXhKeqJ-HtulaEL9Ni7mr1mDzZY5kHlYhI,361 +cryptography/hazmat/backends/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/backends/openssl/__init__.py,sha256=p3jmJfnCag9iE5sdMrN6VvVEu55u46xaS_IjoI0SrmA,305 +cryptography/hazmat/backends/openssl/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/backends/openssl/__pycache__/backend.cpython-311.pyc,, +cryptography/hazmat/backends/openssl/backend.py,sha256=pUXUbugLwMm2Gls-h5U5fw2RvepaNjEvnao6CTmL1xQ,9648 +cryptography/hazmat/bindings/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/bindings/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/bindings/_rust.pyd,sha256=XyDWzsBGhQdXgZlqn1SnjcRKuOOetaK88yNONr70sZA,7900672 +cryptography/hazmat/bindings/_rust/__init__.pyi,sha256=wb1OT76lG19vjq97_q2MM3qdJlQhyloXfVbKFDmRse4,737 +cryptography/hazmat/bindings/_rust/_openssl.pyi,sha256=mpNJLuYLbCVrd5i33FBTmWwL_55Dw7JPkSLlSX9Q7oI,230 +cryptography/hazmat/bindings/_rust/asn1.pyi,sha256=BrGjC8J6nwuS-r3EVcdXJB8ndotfY9mbQYOfpbPG0HA,354 +cryptography/hazmat/bindings/_rust/exceptions.pyi,sha256=exXr2xw_0pB1kk93cYbM3MohbzoUkjOms1ZMUi0uQZE,640 +cryptography/hazmat/bindings/_rust/ocsp.pyi,sha256=R-xJ-XmJZ1lOk-fWHHvRnP3QNTCFnKv-l3xlNWfLVt4,868 +cryptography/hazmat/bindings/_rust/openssl/__init__.pyi,sha256=Lvn250QMdPyeF-hoBF6rkQgHLBJxVauXCb8i8uYTomQ,1368 
+cryptography/hazmat/bindings/_rust/openssl/aead.pyi,sha256=i0gA3jUQ4rkJXTGGZrq-AuY-VQLN31lyDeWuDZ0zJYw,2553 +cryptography/hazmat/bindings/_rust/openssl/ciphers.pyi,sha256=iK0ZhQ-WyCQbjaraaFgK6q4PpD-7Rf5RDHkFD3YEW_g,1301 +cryptography/hazmat/bindings/_rust/openssl/cmac.pyi,sha256=nPH0X57RYpsAkRowVpjQiHE566ThUTx7YXrsadmrmHk,564 +cryptography/hazmat/bindings/_rust/openssl/dh.pyi,sha256=Z3TC-G04-THtSdAOPLM1h2G7ml5bda1ElZUcn5wpuhk,1564 +cryptography/hazmat/bindings/_rust/openssl/dsa.pyi,sha256=qBtkgj2albt2qFcnZ9UDrhzoNhCVO7HTby5VSf1EXMI,1299 +cryptography/hazmat/bindings/_rust/openssl/ec.pyi,sha256=zJy0pRa5n-_p2dm45PxECB_-B6SVZyNKfjxFDpPqT38,1691 +cryptography/hazmat/bindings/_rust/openssl/ed25519.pyi,sha256=OJsrblS2nHptZctva-pAKFL5q8yPEAkhmjPZpJ6TA94,493 +cryptography/hazmat/bindings/_rust/openssl/ed448.pyi,sha256=SkPHK2HdbYN02TVQEUOgW3iTdiEY7HBE4DijpdkAzmk,475 +cryptography/hazmat/bindings/_rust/openssl/hashes.pyi,sha256=J8HoN0GdtPcjRAfNHr5Elva_nkmQfq63L75_z9dd8Uc,573 +cryptography/hazmat/bindings/_rust/openssl/hmac.pyi,sha256=ZmLJ73pmxcZFC1XosWEiXMRYtvJJor3ZLdCQOJu85Cw,662 +cryptography/hazmat/bindings/_rust/openssl/kdf.pyi,sha256=wPS5c7NLspM2632II0I4iH1RSxZvSRtBOVqmpyQATfk,544 +cryptography/hazmat/bindings/_rust/openssl/keys.pyi,sha256=JSrlGNaW49ZCZ1hcb-YJdS1EAbsMwRbVEcLL0P9OApA,872 +cryptography/hazmat/bindings/_rust/openssl/poly1305.pyi,sha256=9iogF7Q4i81IkOS-IMXp6HvxFF_3cNy_ucrAjVQnn14,540 +cryptography/hazmat/bindings/_rust/openssl/rsa.pyi,sha256=2OQCNSXkxgc-3uw1xiCCloIQTV6p9_kK79Yu0rhZgPc,1364 +cryptography/hazmat/bindings/_rust/openssl/x25519.pyi,sha256=2BKdbrddM_9SMUpdvHKGhb9MNjURCarPxccbUDzHeoA,484 +cryptography/hazmat/bindings/_rust/openssl/x448.pyi,sha256=AoRMWNvCJTiH5L-lkIkCdPlrPLUdJvvfXpIvf1GmxpM,466 +cryptography/hazmat/bindings/_rust/pkcs12.pyi,sha256=afhB_6M8xI1MIE5vxkaDF1jSxA48ib1--NiOxtf6boM,1394 +cryptography/hazmat/bindings/_rust/pkcs7.pyi,sha256=QCmuA0IgDr4iOecUOXgUUeh3BAjJx8ubjz__EnNbyGY,972 +cryptography/hazmat/bindings/_rust/test_support.pyi,sha256=Xo1Gd7bh9rU4HuIS4pm9UwCY6IS1gInvFwmhABLOVO4,936 +cryptography/hazmat/bindings/_rust/x509.pyi,sha256=WLrGmqmFss8dXKhlG_J9nVhoCcodR72xJdCoxEuBtjY,3551 +cryptography/hazmat/bindings/openssl/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/bindings/openssl/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/bindings/openssl/__pycache__/_conditional.cpython-311.pyc,, +cryptography/hazmat/bindings/openssl/__pycache__/binding.cpython-311.pyc,, +cryptography/hazmat/bindings/openssl/_conditional.py,sha256=dkGKGU-22uR2ZKeOOwaSxEJCGaafgUjb2romWcu03QE,5163 +cryptography/hazmat/bindings/openssl/binding.py,sha256=e1gnFAZBPrkJ3CsiZV-ug6kaPdNTAEROaUFiFrUh71M,4042 +cryptography/hazmat/decrepit/__init__.py,sha256=wHCbWfaefa-fk6THSw9th9fJUsStJo7245wfFBqmduA,216 +cryptography/hazmat/decrepit/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/decrepit/ciphers/__init__.py,sha256=wHCbWfaefa-fk6THSw9th9fJUsStJo7245wfFBqmduA,216 +cryptography/hazmat/decrepit/ciphers/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/decrepit/ciphers/__pycache__/algorithms.cpython-311.pyc,, +cryptography/hazmat/decrepit/ciphers/algorithms.py,sha256=HWA4PKDS2w4D2dQoRerpLRU7Kntt5vJeJC7j--AlZVU,2520 +cryptography/hazmat/primitives/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/primitives/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/_asymmetric.cpython-311.pyc,, 
+cryptography/hazmat/primitives/__pycache__/_cipheralgorithm.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/_serialization.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/cmac.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/constant_time.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/hashes.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/hmac.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/keywrap.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/padding.cpython-311.pyc,, +cryptography/hazmat/primitives/__pycache__/poly1305.cpython-311.pyc,, +cryptography/hazmat/primitives/_asymmetric.py,sha256=RhgcouUB6HTiFDBrR1LxqkMjpUxIiNvQ1r_zJjRG6qQ,532 +cryptography/hazmat/primitives/_cipheralgorithm.py,sha256=gKa0WrLz6K4fqhnGbfBYKDSxgLxsPU0uj_EK2UT47W4,1495 +cryptography/hazmat/primitives/_serialization.py,sha256=qrozc8fw2WZSbjk3DAlSl3ResxpauwJ74ZgGoUL-mj0,5142 +cryptography/hazmat/primitives/asymmetric/__init__.py,sha256=s9oKCQ2ycFdXoERdS1imafueSkBsL9kvbyfghaauZ9Y,180 +cryptography/hazmat/primitives/asymmetric/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/dh.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/dsa.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ec.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ed25519.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/ed448.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/padding.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/rsa.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/types.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/utils.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/x25519.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/__pycache__/x448.cpython-311.pyc,, +cryptography/hazmat/primitives/asymmetric/dh.py,sha256=OOCjMClH1Bf14Sy7jAdwzEeCxFPb8XUe2qePbExvXwc,3420 +cryptography/hazmat/primitives/asymmetric/dsa.py,sha256=xBwdf0pZOgvqjUKcO7Q0L3NxwalYj0SJDUqThemhSmI,3945 +cryptography/hazmat/primitives/asymmetric/ec.py,sha256=lwZmtAwi3PM8lsY1MsNaby_bVi--49OCxwE_1yqKC-A,10428 +cryptography/hazmat/primitives/asymmetric/ed25519.py,sha256=kl63fg7myuMjNTmMoVFeH6iVr0x5FkjNmggxIRTloJk,3423 +cryptography/hazmat/primitives/asymmetric/ed448.py,sha256=2UzEDzzfkPn83UFVFlMZfIMbAixxY09WmQyrwinWTn8,3456 +cryptography/hazmat/primitives/asymmetric/padding.py,sha256=eZcvUqVLbe3u48SunLdeniaPlV4-k6pwBl67OW4jSy8,2885 +cryptography/hazmat/primitives/asymmetric/rsa.py,sha256=nW_Ko7PID9UBJF10GVJOc_1L00ymFsfZDUJYtM5kfGQ,7637 +cryptography/hazmat/primitives/asymmetric/types.py,sha256=LnsOJym-wmPUJ7Knu_7bCNU3kIiELCd6krOaW_JU08I,2996 +cryptography/hazmat/primitives/asymmetric/utils.py,sha256=DPTs6T4F-UhwzFQTh-1fSEpQzazH2jf2xpIro3ItF4o,790 +cryptography/hazmat/primitives/asymmetric/x25519.py,sha256=VGYuRdIYuVBtizpFdNWd2bTrT10JRa1admQdBr08xz8,3341 +cryptography/hazmat/primitives/asymmetric/x448.py,sha256=GKKJBqYLr03VewMF18bXIM941aaWcZIQ4rC02GLLEmw,3374 +cryptography/hazmat/primitives/ciphers/__init__.py,sha256=eyEXmjk6_CZXaOPYDr7vAYGXr29QvzgWL2-4CSolLFs,680 +cryptography/hazmat/primitives/ciphers/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/aead.cpython-311.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/algorithms.cpython-311.pyc,, 
+cryptography/hazmat/primitives/ciphers/__pycache__/base.cpython-311.pyc,, +cryptography/hazmat/primitives/ciphers/__pycache__/modes.cpython-311.pyc,, +cryptography/hazmat/primitives/ciphers/aead.py,sha256=Fzlyx7w8KYQakzDp1zWgJnIr62zgZrgVh1u2h4exB54,634 +cryptography/hazmat/primitives/ciphers/algorithms.py,sha256=QvBMDmphRZfNmykij58L5eDkd_2NnCzIpJpyX2QwMxc,4223 +cryptography/hazmat/primitives/ciphers/base.py,sha256=tg-XNaKUyETBi7ounGDEL1_ICn-s4FF9LR7moV58blI,4211 +cryptography/hazmat/primitives/ciphers/modes.py,sha256=BFpxEGSaxoeZjrQ4sqpyPDvKClrqfDKIBv7kYtFURhE,8192 +cryptography/hazmat/primitives/cmac.py,sha256=sz_s6H_cYnOvx-VNWdIKhRhe3Ymp8z8J0D3CBqOX3gg,338 +cryptography/hazmat/primitives/constant_time.py,sha256=xdunWT0nf8OvKdcqUhhlFKayGp4_PgVJRU2W1wLSr_A,422 +cryptography/hazmat/primitives/hashes.py,sha256=EvDIJBhj83Z7f-oHbsA0TzZLFSDV_Yv8hQRdM4o8FD0,5091 +cryptography/hazmat/primitives/hmac.py,sha256=RpB3z9z5skirCQrm7zQbtnp9pLMnAjrlTUvKqF5aDDc,423 +cryptography/hazmat/primitives/kdf/__init__.py,sha256=4XibZnrYq4hh5xBjWiIXzaYW6FKx8hPbVaa_cB9zS64,750 +cryptography/hazmat/primitives/kdf/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/concatkdf.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/hkdf.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/kbkdf.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/pbkdf2.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/scrypt.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/__pycache__/x963kdf.cpython-311.pyc,, +cryptography/hazmat/primitives/kdf/concatkdf.py,sha256=bcn4NGXse-EsFl7nlU83e5ilop7TSHcX-CJJS107W80,3686 +cryptography/hazmat/primitives/kdf/hkdf.py,sha256=uhN5L87w4JvtAqQcPh_Ji2TPSc18IDThpaYJiHOWy3A,3015 +cryptography/hazmat/primitives/kdf/kbkdf.py,sha256=eSuLK1sATkamgCAit794jLr7sDNlu5X0USdcWhwJdmk,9146 +cryptography/hazmat/primitives/kdf/pbkdf2.py,sha256=Xj3YIeX30h2BUaoJAtOo1RMXV_em0-eCG0PU_0FHJzM,1950 +cryptography/hazmat/primitives/kdf/scrypt.py,sha256=4QONhjxA_ZtuQtQ7QV3FnbB8ftrFnM52B4HPfV7hFys,2354 +cryptography/hazmat/primitives/kdf/x963kdf.py,sha256=wCpWmwQjZ2vAu2rlk3R_PX0nINl8WGXYBmlyMOC5iPw,1992 +cryptography/hazmat/primitives/keywrap.py,sha256=XV4Pj2fqSeD-RqZVvY2cA3j5_7RwJSFygYuLfk2ujCo,5650 +cryptography/hazmat/primitives/padding.py,sha256=QUq0n-EAgEan9aQzuTsiJYGKbWiK1nSHkcYjDF1L1ok,5518 +cryptography/hazmat/primitives/poly1305.py,sha256=P5EPQV-RB_FJPahpg01u0Ts4S_PnAmsroxIGXbGeRRo,355 +cryptography/hazmat/primitives/serialization/__init__.py,sha256=jyNx_7NcOEbVRBY4nP9ks0IVXBafbcYnTK27vafPLW8,1653 +cryptography/hazmat/primitives/serialization/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/base.cpython-311.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/pkcs12.cpython-311.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/pkcs7.cpython-311.pyc,, +cryptography/hazmat/primitives/serialization/__pycache__/ssh.cpython-311.pyc,, +cryptography/hazmat/primitives/serialization/base.py,sha256=ikq5MJIwp_oUnjiaBco_PmQwOTYuGi-XkYUYHKy8Vo0,615 +cryptography/hazmat/primitives/serialization/pkcs12.py,sha256=7vVXbiP7qhhvKAHJT_M8-LBZdbpOwrpWRHWxNrNqzXE,4492 +cryptography/hazmat/primitives/serialization/pkcs7.py,sha256=CNzcsuDMyEFMe3EUii4NfJlQzmakB2hLlfRFYObnHRs,11141 +cryptography/hazmat/primitives/serialization/ssh.py,sha256=VKscMrVdYK5B9PQISjjdRMglRvqa_L3sDNm5vdjVHJY,51915 
+cryptography/hazmat/primitives/twofactor/__init__.py,sha256=tmMZGB-g4IU1r7lIFqASU019zr0uPp_wEBYcwdDCKCA,258 +cryptography/hazmat/primitives/twofactor/__pycache__/__init__.cpython-311.pyc,, +cryptography/hazmat/primitives/twofactor/__pycache__/hotp.cpython-311.pyc,, +cryptography/hazmat/primitives/twofactor/__pycache__/totp.cpython-311.pyc,, +cryptography/hazmat/primitives/twofactor/hotp.py,sha256=l1YdRMIhfPIuHKkA66keBDHhNbnBAlh6-O44P-OHIK8,2976 +cryptography/hazmat/primitives/twofactor/totp.py,sha256=v0y0xKwtYrP83ypOo5Ofd441RJLOkaFfjmp554jo5F0,1450 +cryptography/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +cryptography/utils.py,sha256=Rp7ppg4XIBVVzNQ6XngGndwkICJoYp6FoFOOgTWLJ7g,3925 +cryptography/x509/__init__.py,sha256=uGdiViR7KFnWGoJFVUStt-e_ufomWc87RQBGAZ7dT-4,7980 +cryptography/x509/__pycache__/__init__.cpython-311.pyc,, +cryptography/x509/__pycache__/base.cpython-311.pyc,, +cryptography/x509/__pycache__/certificate_transparency.cpython-311.pyc,, +cryptography/x509/__pycache__/extensions.cpython-311.pyc,, +cryptography/x509/__pycache__/general_name.cpython-311.pyc,, +cryptography/x509/__pycache__/name.cpython-311.pyc,, +cryptography/x509/__pycache__/ocsp.cpython-311.pyc,, +cryptography/x509/__pycache__/oid.cpython-311.pyc,, +cryptography/x509/__pycache__/verification.cpython-311.pyc,, +cryptography/x509/base.py,sha256=3NbbUn9wPruhmoPO7Cl3trc3SrqV2OFIBBE0P2l05mg,37081 +cryptography/x509/certificate_transparency.py,sha256=6HvzAD0dlSQVxy6tnDhGj0-pisp1MaJ9bxQNRr92inI,2261 +cryptography/x509/extensions.py,sha256=R70KkJ_c5NQ6Kx7Rho0sGJ0Rh-bOuBHjVOFSQGRAFCs,67370 +cryptography/x509/general_name.py,sha256=sP_rV11Qlpsk4x3XXGJY_Mv0Q_s9dtjeLckHsjpLQoQ,7836 +cryptography/x509/name.py,sha256=MYCxCSTQTpzhjxFPZaANqJ9fGrhESH73vPkoay8HSWM,14830 +cryptography/x509/ocsp.py,sha256=P6A02msz5pe-IkUFpvxezHvnEHGvPdXiD3S0wsuf4-I,20003 +cryptography/x509/oid.py,sha256=X8EbhkRTLrGuv9vHZSGqPd9zpvRVsonU_joWAL5LLY8,885 +cryptography/x509/verification.py,sha256=alfx3VaTSb2bMz7_7s788oL90vzgHwBjVINssdz0Gv0,796 diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/REQUESTED b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/REQUESTED new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/WHEEL b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/WHEEL new file mode 100644 index 00000000..cf859433 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/WHEEL @@ -0,0 +1,4 @@ +Wheel-Version: 1.0 +Generator: maturin (1.7.0) +Root-Is-Purelib: false +Tag: cp39-abi3-win_amd64 diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE new file mode 100644 index 00000000..b11f379e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE @@ -0,0 +1,3 @@ +This software is made available under the terms of *either* of the licenses +found in LICENSE.APACHE or LICENSE.BSD. Contributions to cryptography are made +under the terms of *both* these licenses. 
diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.APACHE b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.APACHE new file mode 100644 index 00000000..62589edd --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.APACHE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + https://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + https://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.BSD b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.BSD new file mode 100644 index 00000000..ec1a29d3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography-43.0.1.dist-info/license_files/LICENSE.BSD @@ -0,0 +1,27 @@ +Copyright (c) Individual contributors. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. + + 3. Neither the name of PyCA Cryptography nor the names of its contributors + may be used to endorse or promote products derived from this software + without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR +ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES +(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; +LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON +ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS +SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/templates/skills/file_manager/dependencies/cryptography/__about__.py b/templates/skills/file_manager/dependencies/cryptography/__about__.py new file mode 100644 index 00000000..d224b2fc --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/__about__.py @@ -0,0 +1,17 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +__all__ = [ + "__author__", + "__copyright__", + "__version__", +] + +__version__ = "43.0.1" + + +__author__ = "The Python Cryptographic Authority and individual contributors" +__copyright__ = f"Copyright 2013-2024 {__author__}" diff --git a/templates/skills/file_manager/dependencies/cryptography/__init__.py b/templates/skills/file_manager/dependencies/cryptography/__init__.py new file mode 100644 index 00000000..d374f752 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/__init__.py @@ -0,0 +1,13 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.__about__ import __author__, __copyright__, __version__ + +__all__ = [ + "__author__", + "__copyright__", + "__version__", +] diff --git a/templates/skills/file_manager/dependencies/cryptography/exceptions.py b/templates/skills/file_manager/dependencies/cryptography/exceptions.py new file mode 100644 index 00000000..fe125ea9 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/exceptions.py @@ -0,0 +1,52 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import typing + +from cryptography.hazmat.bindings._rust import exceptions as rust_exceptions + +if typing.TYPE_CHECKING: + from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +_Reasons = rust_exceptions._Reasons + + +class UnsupportedAlgorithm(Exception): + def __init__(self, message: str, reason: _Reasons | None = None) -> None: + super().__init__(message) + self._reason = reason + + +class AlreadyFinalized(Exception): + pass + + +class AlreadyUpdated(Exception): + pass + + +class NotYetFinalized(Exception): + pass + + +class InvalidTag(Exception): + pass + + +class InvalidSignature(Exception): + pass + + +class InternalError(Exception): + def __init__( + self, msg: str, err_code: list[rust_openssl.OpenSSLError] + ) -> None: + super().__init__(msg) + self.err_code = err_code + + +class InvalidKey(Exception): + pass diff --git a/templates/skills/file_manager/dependencies/cryptography/fernet.py b/templates/skills/file_manager/dependencies/cryptography/fernet.py new file mode 100644 index 00000000..35ce1131 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/fernet.py @@ -0,0 +1,215 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import base64 +import binascii +import os +import time +import typing + +from cryptography import utils +from cryptography.exceptions import InvalidSignature +from cryptography.hazmat.primitives import hashes, padding +from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes +from cryptography.hazmat.primitives.hmac import HMAC + + +class InvalidToken(Exception): + pass + + +_MAX_CLOCK_SKEW = 60 + + +class Fernet: + def __init__( + self, + key: bytes | str, + backend: typing.Any = None, + ) -> None: + try: + key = base64.urlsafe_b64decode(key) + except binascii.Error as exc: + raise ValueError( + "Fernet key must be 32 url-safe base64-encoded bytes." 
+ ) from exc + if len(key) != 32: + raise ValueError( + "Fernet key must be 32 url-safe base64-encoded bytes." + ) + + self._signing_key = key[:16] + self._encryption_key = key[16:] + + @classmethod + def generate_key(cls) -> bytes: + return base64.urlsafe_b64encode(os.urandom(32)) + + def encrypt(self, data: bytes) -> bytes: + return self.encrypt_at_time(data, int(time.time())) + + def encrypt_at_time(self, data: bytes, current_time: int) -> bytes: + iv = os.urandom(16) + return self._encrypt_from_parts(data, current_time, iv) + + def _encrypt_from_parts( + self, data: bytes, current_time: int, iv: bytes + ) -> bytes: + utils._check_bytes("data", data) + + padder = padding.PKCS7(algorithms.AES.block_size).padder() + padded_data = padder.update(data) + padder.finalize() + encryptor = Cipher( + algorithms.AES(self._encryption_key), + modes.CBC(iv), + ).encryptor() + ciphertext = encryptor.update(padded_data) + encryptor.finalize() + + basic_parts = ( + b"\x80" + + current_time.to_bytes(length=8, byteorder="big") + + iv + + ciphertext + ) + + h = HMAC(self._signing_key, hashes.SHA256()) + h.update(basic_parts) + hmac = h.finalize() + return base64.urlsafe_b64encode(basic_parts + hmac) + + def decrypt(self, token: bytes | str, ttl: int | None = None) -> bytes: + timestamp, data = Fernet._get_unverified_token_data(token) + if ttl is None: + time_info = None + else: + time_info = (ttl, int(time.time())) + return self._decrypt_data(data, timestamp, time_info) + + def decrypt_at_time( + self, token: bytes | str, ttl: int, current_time: int + ) -> bytes: + if ttl is None: + raise ValueError( + "decrypt_at_time() can only be used with a non-None ttl" + ) + timestamp, data = Fernet._get_unverified_token_data(token) + return self._decrypt_data(data, timestamp, (ttl, current_time)) + + def extract_timestamp(self, token: bytes | str) -> int: + timestamp, data = Fernet._get_unverified_token_data(token) + # Verify the token was not tampered with. 
+ self._verify_signature(data) + return timestamp + + @staticmethod + def _get_unverified_token_data(token: bytes | str) -> tuple[int, bytes]: + if not isinstance(token, (str, bytes)): + raise TypeError("token must be bytes or str") + + try: + data = base64.urlsafe_b64decode(token) + except (TypeError, binascii.Error): + raise InvalidToken + + if not data or data[0] != 0x80: + raise InvalidToken + + if len(data) < 9: + raise InvalidToken + + timestamp = int.from_bytes(data[1:9], byteorder="big") + return timestamp, data + + def _verify_signature(self, data: bytes) -> None: + h = HMAC(self._signing_key, hashes.SHA256()) + h.update(data[:-32]) + try: + h.verify(data[-32:]) + except InvalidSignature: + raise InvalidToken + + def _decrypt_data( + self, + data: bytes, + timestamp: int, + time_info: tuple[int, int] | None, + ) -> bytes: + if time_info is not None: + ttl, current_time = time_info + if timestamp + ttl < current_time: + raise InvalidToken + + if current_time + _MAX_CLOCK_SKEW < timestamp: + raise InvalidToken + + self._verify_signature(data) + + iv = data[9:25] + ciphertext = data[25:-32] + decryptor = Cipher( + algorithms.AES(self._encryption_key), modes.CBC(iv) + ).decryptor() + plaintext_padded = decryptor.update(ciphertext) + try: + plaintext_padded += decryptor.finalize() + except ValueError: + raise InvalidToken + unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder() + + unpadded = unpadder.update(plaintext_padded) + try: + unpadded += unpadder.finalize() + except ValueError: + raise InvalidToken + return unpadded + + +class MultiFernet: + def __init__(self, fernets: typing.Iterable[Fernet]): + fernets = list(fernets) + if not fernets: + raise ValueError( + "MultiFernet requires at least one Fernet instance" + ) + self._fernets = fernets + + def encrypt(self, msg: bytes) -> bytes: + return self.encrypt_at_time(msg, int(time.time())) + + def encrypt_at_time(self, msg: bytes, current_time: int) -> bytes: + return self._fernets[0].encrypt_at_time(msg, current_time) + + def rotate(self, msg: bytes | str) -> bytes: + timestamp, data = Fernet._get_unverified_token_data(msg) + for f in self._fernets: + try: + p = f._decrypt_data(data, timestamp, None) + break + except InvalidToken: + pass + else: + raise InvalidToken + + iv = os.urandom(16) + return self._fernets[0]._encrypt_from_parts(p, timestamp, iv) + + def decrypt(self, msg: bytes | str, ttl: int | None = None) -> bytes: + for f in self._fernets: + try: + return f.decrypt(msg, ttl) + except InvalidToken: + pass + raise InvalidToken + + def decrypt_at_time( + self, msg: bytes | str, ttl: int, current_time: int + ) -> bytes: + for f in self._fernets: + try: + return f.decrypt_at_time(msg, ttl, current_time) + except InvalidToken: + pass + raise InvalidToken diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/__init__.py new file mode 100644 index 00000000..b9f11870 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/__init__.py @@ -0,0 +1,13 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +""" +Hazardous Materials + +This is a "Hazardous Materials" module. 
You should ONLY use it if you're +100% absolutely sure that you know what you're doing because this module +is full of land mines, dragons, and dinosaurs with laser guns. +""" diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/_oid.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/_oid.py new file mode 100644 index 00000000..fd5e37d9 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/_oid.py @@ -0,0 +1,313 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import ( + ObjectIdentifier as ObjectIdentifier, +) +from cryptography.hazmat.primitives import hashes + + +class ExtensionOID: + SUBJECT_DIRECTORY_ATTRIBUTES = ObjectIdentifier("2.5.29.9") + SUBJECT_KEY_IDENTIFIER = ObjectIdentifier("2.5.29.14") + KEY_USAGE = ObjectIdentifier("2.5.29.15") + SUBJECT_ALTERNATIVE_NAME = ObjectIdentifier("2.5.29.17") + ISSUER_ALTERNATIVE_NAME = ObjectIdentifier("2.5.29.18") + BASIC_CONSTRAINTS = ObjectIdentifier("2.5.29.19") + NAME_CONSTRAINTS = ObjectIdentifier("2.5.29.30") + CRL_DISTRIBUTION_POINTS = ObjectIdentifier("2.5.29.31") + CERTIFICATE_POLICIES = ObjectIdentifier("2.5.29.32") + POLICY_MAPPINGS = ObjectIdentifier("2.5.29.33") + AUTHORITY_KEY_IDENTIFIER = ObjectIdentifier("2.5.29.35") + POLICY_CONSTRAINTS = ObjectIdentifier("2.5.29.36") + EXTENDED_KEY_USAGE = ObjectIdentifier("2.5.29.37") + FRESHEST_CRL = ObjectIdentifier("2.5.29.46") + INHIBIT_ANY_POLICY = ObjectIdentifier("2.5.29.54") + ISSUING_DISTRIBUTION_POINT = ObjectIdentifier("2.5.29.28") + AUTHORITY_INFORMATION_ACCESS = ObjectIdentifier("1.3.6.1.5.5.7.1.1") + SUBJECT_INFORMATION_ACCESS = ObjectIdentifier("1.3.6.1.5.5.7.1.11") + OCSP_NO_CHECK = ObjectIdentifier("1.3.6.1.5.5.7.48.1.5") + TLS_FEATURE = ObjectIdentifier("1.3.6.1.5.5.7.1.24") + CRL_NUMBER = ObjectIdentifier("2.5.29.20") + DELTA_CRL_INDICATOR = ObjectIdentifier("2.5.29.27") + PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS = ObjectIdentifier( + "1.3.6.1.4.1.11129.2.4.2" + ) + PRECERT_POISON = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.3") + SIGNED_CERTIFICATE_TIMESTAMPS = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.5") + MS_CERTIFICATE_TEMPLATE = ObjectIdentifier("1.3.6.1.4.1.311.21.7") + + +class OCSPExtensionOID: + NONCE = ObjectIdentifier("1.3.6.1.5.5.7.48.1.2") + ACCEPTABLE_RESPONSES = ObjectIdentifier("1.3.6.1.5.5.7.48.1.4") + + +class CRLEntryExtensionOID: + CERTIFICATE_ISSUER = ObjectIdentifier("2.5.29.29") + CRL_REASON = ObjectIdentifier("2.5.29.21") + INVALIDITY_DATE = ObjectIdentifier("2.5.29.24") + + +class NameOID: + COMMON_NAME = ObjectIdentifier("2.5.4.3") + COUNTRY_NAME = ObjectIdentifier("2.5.4.6") + LOCALITY_NAME = ObjectIdentifier("2.5.4.7") + STATE_OR_PROVINCE_NAME = ObjectIdentifier("2.5.4.8") + STREET_ADDRESS = ObjectIdentifier("2.5.4.9") + ORGANIZATION_IDENTIFIER = ObjectIdentifier("2.5.4.97") + ORGANIZATION_NAME = ObjectIdentifier("2.5.4.10") + ORGANIZATIONAL_UNIT_NAME = ObjectIdentifier("2.5.4.11") + SERIAL_NUMBER = ObjectIdentifier("2.5.4.5") + SURNAME = ObjectIdentifier("2.5.4.4") + GIVEN_NAME = ObjectIdentifier("2.5.4.42") + TITLE = ObjectIdentifier("2.5.4.12") + INITIALS = ObjectIdentifier("2.5.4.43") + GENERATION_QUALIFIER = ObjectIdentifier("2.5.4.44") + X500_UNIQUE_IDENTIFIER = ObjectIdentifier("2.5.4.45") + DN_QUALIFIER = ObjectIdentifier("2.5.4.46") + PSEUDONYM = 
ObjectIdentifier("2.5.4.65") + USER_ID = ObjectIdentifier("0.9.2342.19200300.100.1.1") + DOMAIN_COMPONENT = ObjectIdentifier("0.9.2342.19200300.100.1.25") + EMAIL_ADDRESS = ObjectIdentifier("1.2.840.113549.1.9.1") + JURISDICTION_COUNTRY_NAME = ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.3") + JURISDICTION_LOCALITY_NAME = ObjectIdentifier("1.3.6.1.4.1.311.60.2.1.1") + JURISDICTION_STATE_OR_PROVINCE_NAME = ObjectIdentifier( + "1.3.6.1.4.1.311.60.2.1.2" + ) + BUSINESS_CATEGORY = ObjectIdentifier("2.5.4.15") + POSTAL_ADDRESS = ObjectIdentifier("2.5.4.16") + POSTAL_CODE = ObjectIdentifier("2.5.4.17") + INN = ObjectIdentifier("1.2.643.3.131.1.1") + OGRN = ObjectIdentifier("1.2.643.100.1") + SNILS = ObjectIdentifier("1.2.643.100.3") + UNSTRUCTURED_NAME = ObjectIdentifier("1.2.840.113549.1.9.2") + + +class SignatureAlgorithmOID: + RSA_WITH_MD5 = ObjectIdentifier("1.2.840.113549.1.1.4") + RSA_WITH_SHA1 = ObjectIdentifier("1.2.840.113549.1.1.5") + # This is an alternate OID for RSA with SHA1 that is occasionally seen + _RSA_WITH_SHA1 = ObjectIdentifier("1.3.14.3.2.29") + RSA_WITH_SHA224 = ObjectIdentifier("1.2.840.113549.1.1.14") + RSA_WITH_SHA256 = ObjectIdentifier("1.2.840.113549.1.1.11") + RSA_WITH_SHA384 = ObjectIdentifier("1.2.840.113549.1.1.12") + RSA_WITH_SHA512 = ObjectIdentifier("1.2.840.113549.1.1.13") + RSA_WITH_SHA3_224 = ObjectIdentifier("2.16.840.1.101.3.4.3.13") + RSA_WITH_SHA3_256 = ObjectIdentifier("2.16.840.1.101.3.4.3.14") + RSA_WITH_SHA3_384 = ObjectIdentifier("2.16.840.1.101.3.4.3.15") + RSA_WITH_SHA3_512 = ObjectIdentifier("2.16.840.1.101.3.4.3.16") + RSASSA_PSS = ObjectIdentifier("1.2.840.113549.1.1.10") + ECDSA_WITH_SHA1 = ObjectIdentifier("1.2.840.10045.4.1") + ECDSA_WITH_SHA224 = ObjectIdentifier("1.2.840.10045.4.3.1") + ECDSA_WITH_SHA256 = ObjectIdentifier("1.2.840.10045.4.3.2") + ECDSA_WITH_SHA384 = ObjectIdentifier("1.2.840.10045.4.3.3") + ECDSA_WITH_SHA512 = ObjectIdentifier("1.2.840.10045.4.3.4") + ECDSA_WITH_SHA3_224 = ObjectIdentifier("2.16.840.1.101.3.4.3.9") + ECDSA_WITH_SHA3_256 = ObjectIdentifier("2.16.840.1.101.3.4.3.10") + ECDSA_WITH_SHA3_384 = ObjectIdentifier("2.16.840.1.101.3.4.3.11") + ECDSA_WITH_SHA3_512 = ObjectIdentifier("2.16.840.1.101.3.4.3.12") + DSA_WITH_SHA1 = ObjectIdentifier("1.2.840.10040.4.3") + DSA_WITH_SHA224 = ObjectIdentifier("2.16.840.1.101.3.4.3.1") + DSA_WITH_SHA256 = ObjectIdentifier("2.16.840.1.101.3.4.3.2") + DSA_WITH_SHA384 = ObjectIdentifier("2.16.840.1.101.3.4.3.3") + DSA_WITH_SHA512 = ObjectIdentifier("2.16.840.1.101.3.4.3.4") + ED25519 = ObjectIdentifier("1.3.101.112") + ED448 = ObjectIdentifier("1.3.101.113") + GOSTR3411_94_WITH_3410_2001 = ObjectIdentifier("1.2.643.2.2.3") + GOSTR3410_2012_WITH_3411_2012_256 = ObjectIdentifier("1.2.643.7.1.1.3.2") + GOSTR3410_2012_WITH_3411_2012_512 = ObjectIdentifier("1.2.643.7.1.1.3.3") + + +_SIG_OIDS_TO_HASH: dict[ObjectIdentifier, hashes.HashAlgorithm | None] = { + SignatureAlgorithmOID.RSA_WITH_MD5: hashes.MD5(), + SignatureAlgorithmOID.RSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID._RSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.RSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.RSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.RSA_WITH_SHA384: hashes.SHA384(), + SignatureAlgorithmOID.RSA_WITH_SHA512: hashes.SHA512(), + SignatureAlgorithmOID.RSA_WITH_SHA3_224: hashes.SHA3_224(), + SignatureAlgorithmOID.RSA_WITH_SHA3_256: hashes.SHA3_256(), + SignatureAlgorithmOID.RSA_WITH_SHA3_384: hashes.SHA3_384(), + SignatureAlgorithmOID.RSA_WITH_SHA3_512: hashes.SHA3_512(), + 
SignatureAlgorithmOID.ECDSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.ECDSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.ECDSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.ECDSA_WITH_SHA384: hashes.SHA384(), + SignatureAlgorithmOID.ECDSA_WITH_SHA512: hashes.SHA512(), + SignatureAlgorithmOID.ECDSA_WITH_SHA3_224: hashes.SHA3_224(), + SignatureAlgorithmOID.ECDSA_WITH_SHA3_256: hashes.SHA3_256(), + SignatureAlgorithmOID.ECDSA_WITH_SHA3_384: hashes.SHA3_384(), + SignatureAlgorithmOID.ECDSA_WITH_SHA3_512: hashes.SHA3_512(), + SignatureAlgorithmOID.DSA_WITH_SHA1: hashes.SHA1(), + SignatureAlgorithmOID.DSA_WITH_SHA224: hashes.SHA224(), + SignatureAlgorithmOID.DSA_WITH_SHA256: hashes.SHA256(), + SignatureAlgorithmOID.ED25519: None, + SignatureAlgorithmOID.ED448: None, + SignatureAlgorithmOID.GOSTR3411_94_WITH_3410_2001: None, + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_256: None, + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_512: None, +} + + +class PublicKeyAlgorithmOID: + DSA = ObjectIdentifier("1.2.840.10040.4.1") + EC_PUBLIC_KEY = ObjectIdentifier("1.2.840.10045.2.1") + RSAES_PKCS1_v1_5 = ObjectIdentifier("1.2.840.113549.1.1.1") + RSASSA_PSS = ObjectIdentifier("1.2.840.113549.1.1.10") + X25519 = ObjectIdentifier("1.3.101.110") + X448 = ObjectIdentifier("1.3.101.111") + ED25519 = ObjectIdentifier("1.3.101.112") + ED448 = ObjectIdentifier("1.3.101.113") + + +class ExtendedKeyUsageOID: + SERVER_AUTH = ObjectIdentifier("1.3.6.1.5.5.7.3.1") + CLIENT_AUTH = ObjectIdentifier("1.3.6.1.5.5.7.3.2") + CODE_SIGNING = ObjectIdentifier("1.3.6.1.5.5.7.3.3") + EMAIL_PROTECTION = ObjectIdentifier("1.3.6.1.5.5.7.3.4") + TIME_STAMPING = ObjectIdentifier("1.3.6.1.5.5.7.3.8") + OCSP_SIGNING = ObjectIdentifier("1.3.6.1.5.5.7.3.9") + ANY_EXTENDED_KEY_USAGE = ObjectIdentifier("2.5.29.37.0") + SMARTCARD_LOGON = ObjectIdentifier("1.3.6.1.4.1.311.20.2.2") + KERBEROS_PKINIT_KDC = ObjectIdentifier("1.3.6.1.5.2.3.5") + IPSEC_IKE = ObjectIdentifier("1.3.6.1.5.5.7.3.17") + CERTIFICATE_TRANSPARENCY = ObjectIdentifier("1.3.6.1.4.1.11129.2.4.4") + + +class AuthorityInformationAccessOID: + CA_ISSUERS = ObjectIdentifier("1.3.6.1.5.5.7.48.2") + OCSP = ObjectIdentifier("1.3.6.1.5.5.7.48.1") + + +class SubjectInformationAccessOID: + CA_REPOSITORY = ObjectIdentifier("1.3.6.1.5.5.7.48.5") + + +class CertificatePoliciesOID: + CPS_QUALIFIER = ObjectIdentifier("1.3.6.1.5.5.7.2.1") + CPS_USER_NOTICE = ObjectIdentifier("1.3.6.1.5.5.7.2.2") + ANY_POLICY = ObjectIdentifier("2.5.29.32.0") + + +class AttributeOID: + CHALLENGE_PASSWORD = ObjectIdentifier("1.2.840.113549.1.9.7") + UNSTRUCTURED_NAME = ObjectIdentifier("1.2.840.113549.1.9.2") + + +_OID_NAMES = { + NameOID.COMMON_NAME: "commonName", + NameOID.COUNTRY_NAME: "countryName", + NameOID.LOCALITY_NAME: "localityName", + NameOID.STATE_OR_PROVINCE_NAME: "stateOrProvinceName", + NameOID.STREET_ADDRESS: "streetAddress", + NameOID.ORGANIZATION_NAME: "organizationName", + NameOID.ORGANIZATIONAL_UNIT_NAME: "organizationalUnitName", + NameOID.SERIAL_NUMBER: "serialNumber", + NameOID.SURNAME: "surname", + NameOID.GIVEN_NAME: "givenName", + NameOID.TITLE: "title", + NameOID.GENERATION_QUALIFIER: "generationQualifier", + NameOID.X500_UNIQUE_IDENTIFIER: "x500UniqueIdentifier", + NameOID.DN_QUALIFIER: "dnQualifier", + NameOID.PSEUDONYM: "pseudonym", + NameOID.USER_ID: "userID", + NameOID.DOMAIN_COMPONENT: "domainComponent", + NameOID.EMAIL_ADDRESS: "emailAddress", + NameOID.JURISDICTION_COUNTRY_NAME: "jurisdictionCountryName", + 
NameOID.JURISDICTION_LOCALITY_NAME: "jurisdictionLocalityName", + NameOID.JURISDICTION_STATE_OR_PROVINCE_NAME: ( + "jurisdictionStateOrProvinceName" + ), + NameOID.BUSINESS_CATEGORY: "businessCategory", + NameOID.POSTAL_ADDRESS: "postalAddress", + NameOID.POSTAL_CODE: "postalCode", + NameOID.INN: "INN", + NameOID.OGRN: "OGRN", + NameOID.SNILS: "SNILS", + NameOID.UNSTRUCTURED_NAME: "unstructuredName", + SignatureAlgorithmOID.RSA_WITH_MD5: "md5WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA1: "sha1WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA224: "sha224WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA256: "sha256WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA384: "sha384WithRSAEncryption", + SignatureAlgorithmOID.RSA_WITH_SHA512: "sha512WithRSAEncryption", + SignatureAlgorithmOID.RSASSA_PSS: "RSASSA-PSS", + SignatureAlgorithmOID.ECDSA_WITH_SHA1: "ecdsa-with-SHA1", + SignatureAlgorithmOID.ECDSA_WITH_SHA224: "ecdsa-with-SHA224", + SignatureAlgorithmOID.ECDSA_WITH_SHA256: "ecdsa-with-SHA256", + SignatureAlgorithmOID.ECDSA_WITH_SHA384: "ecdsa-with-SHA384", + SignatureAlgorithmOID.ECDSA_WITH_SHA512: "ecdsa-with-SHA512", + SignatureAlgorithmOID.DSA_WITH_SHA1: "dsa-with-sha1", + SignatureAlgorithmOID.DSA_WITH_SHA224: "dsa-with-sha224", + SignatureAlgorithmOID.DSA_WITH_SHA256: "dsa-with-sha256", + SignatureAlgorithmOID.ED25519: "ed25519", + SignatureAlgorithmOID.ED448: "ed448", + SignatureAlgorithmOID.GOSTR3411_94_WITH_3410_2001: ( + "GOST R 34.11-94 with GOST R 34.10-2001" + ), + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_256: ( + "GOST R 34.10-2012 with GOST R 34.11-2012 (256 bit)" + ), + SignatureAlgorithmOID.GOSTR3410_2012_WITH_3411_2012_512: ( + "GOST R 34.10-2012 with GOST R 34.11-2012 (512 bit)" + ), + PublicKeyAlgorithmOID.DSA: "dsaEncryption", + PublicKeyAlgorithmOID.EC_PUBLIC_KEY: "id-ecPublicKey", + PublicKeyAlgorithmOID.RSAES_PKCS1_v1_5: "rsaEncryption", + PublicKeyAlgorithmOID.RSASSA_PSS: "rsassaPss", + PublicKeyAlgorithmOID.X25519: "X25519", + PublicKeyAlgorithmOID.X448: "X448", + ExtendedKeyUsageOID.SERVER_AUTH: "serverAuth", + ExtendedKeyUsageOID.CLIENT_AUTH: "clientAuth", + ExtendedKeyUsageOID.CODE_SIGNING: "codeSigning", + ExtendedKeyUsageOID.EMAIL_PROTECTION: "emailProtection", + ExtendedKeyUsageOID.TIME_STAMPING: "timeStamping", + ExtendedKeyUsageOID.OCSP_SIGNING: "OCSPSigning", + ExtendedKeyUsageOID.SMARTCARD_LOGON: "msSmartcardLogin", + ExtendedKeyUsageOID.KERBEROS_PKINIT_KDC: "pkInitKDC", + ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES: "subjectDirectoryAttributes", + ExtensionOID.SUBJECT_KEY_IDENTIFIER: "subjectKeyIdentifier", + ExtensionOID.KEY_USAGE: "keyUsage", + ExtensionOID.SUBJECT_ALTERNATIVE_NAME: "subjectAltName", + ExtensionOID.ISSUER_ALTERNATIVE_NAME: "issuerAltName", + ExtensionOID.BASIC_CONSTRAINTS: "basicConstraints", + ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS: ( + "signedCertificateTimestampList" + ), + ExtensionOID.SIGNED_CERTIFICATE_TIMESTAMPS: ( + "signedCertificateTimestampList" + ), + ExtensionOID.PRECERT_POISON: "ctPoison", + ExtensionOID.MS_CERTIFICATE_TEMPLATE: "msCertificateTemplate", + CRLEntryExtensionOID.CRL_REASON: "cRLReason", + CRLEntryExtensionOID.INVALIDITY_DATE: "invalidityDate", + CRLEntryExtensionOID.CERTIFICATE_ISSUER: "certificateIssuer", + ExtensionOID.NAME_CONSTRAINTS: "nameConstraints", + ExtensionOID.CRL_DISTRIBUTION_POINTS: "cRLDistributionPoints", + ExtensionOID.CERTIFICATE_POLICIES: "certificatePolicies", + ExtensionOID.POLICY_MAPPINGS: "policyMappings", + 
ExtensionOID.AUTHORITY_KEY_IDENTIFIER: "authorityKeyIdentifier", + ExtensionOID.POLICY_CONSTRAINTS: "policyConstraints", + ExtensionOID.EXTENDED_KEY_USAGE: "extendedKeyUsage", + ExtensionOID.FRESHEST_CRL: "freshestCRL", + ExtensionOID.INHIBIT_ANY_POLICY: "inhibitAnyPolicy", + ExtensionOID.ISSUING_DISTRIBUTION_POINT: "issuingDistributionPoint", + ExtensionOID.AUTHORITY_INFORMATION_ACCESS: "authorityInfoAccess", + ExtensionOID.SUBJECT_INFORMATION_ACCESS: "subjectInfoAccess", + ExtensionOID.OCSP_NO_CHECK: "OCSPNoCheck", + ExtensionOID.CRL_NUMBER: "cRLNumber", + ExtensionOID.DELTA_CRL_INDICATOR: "deltaCRLIndicator", + ExtensionOID.TLS_FEATURE: "TLSFeature", + AuthorityInformationAccessOID.OCSP: "OCSP", + AuthorityInformationAccessOID.CA_ISSUERS: "caIssuers", + SubjectInformationAccessOID.CA_REPOSITORY: "caRepository", + CertificatePoliciesOID.CPS_QUALIFIER: "id-qt-cps", + CertificatePoliciesOID.CPS_USER_NOTICE: "id-qt-unotice", + OCSPExtensionOID.NONCE: "OCSPNonce", + AttributeOID.CHALLENGE_PASSWORD: "challengePassword", +} diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/__init__.py new file mode 100644 index 00000000..b4400aa0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/__init__.py @@ -0,0 +1,13 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from typing import Any + + +def default_backend() -> Any: + from cryptography.hazmat.backends.openssl.backend import backend + + return backend diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/__init__.py new file mode 100644 index 00000000..51b04476 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/__init__.py @@ -0,0 +1,9 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.backends.openssl.backend import backend + +__all__ = ["backend"] diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/backend.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/backend.py new file mode 100644 index 00000000..c87d3e84 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/backends/openssl/backend.py @@ -0,0 +1,291 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.bindings.openssl import binding +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives._asymmetric import AsymmetricPadding +from cryptography.hazmat.primitives.asymmetric import ec +from cryptography.hazmat.primitives.asymmetric import utils as asym_utils +from cryptography.hazmat.primitives.asymmetric.padding import ( + MGF1, + OAEP, + PSS, + PKCS1v15, +) +from cryptography.hazmat.primitives.ciphers import ( + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers.algorithms import ( + AES, +) +from cryptography.hazmat.primitives.ciphers.modes import ( + CBC, + Mode, +) + + +class Backend: + """ + OpenSSL API binding interfaces. + """ + + name = "openssl" + + # TripleDES encryption is disallowed/deprecated throughout 2023 in + # FIPS 140-3. To keep it simple we denylist any use of TripleDES (TDEA). + _fips_ciphers = (AES,) + # Sometimes SHA1 is still permissible. That logic is contained + # within the various *_supported methods. + _fips_hashes = ( + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + hashes.SHA512_224, + hashes.SHA512_256, + hashes.SHA3_224, + hashes.SHA3_256, + hashes.SHA3_384, + hashes.SHA3_512, + hashes.SHAKE128, + hashes.SHAKE256, + ) + _fips_ecdh_curves = ( + ec.SECP224R1, + ec.SECP256R1, + ec.SECP384R1, + ec.SECP521R1, + ) + _fips_rsa_min_key_size = 2048 + _fips_rsa_min_public_exponent = 65537 + _fips_dsa_min_modulus = 1 << 2048 + _fips_dh_min_key_size = 2048 + _fips_dh_min_modulus = 1 << _fips_dh_min_key_size + + def __init__(self) -> None: + self._binding = binding.Binding() + self._ffi = self._binding.ffi + self._lib = self._binding.lib + self._fips_enabled = rust_openssl.is_fips_enabled() + + def __repr__(self) -> str: + return ( + f"<OpenSSLBackend(version: {self.openssl_version_text()}, " + f"FIPS: {self._fips_enabled})>" + ) + + def openssl_assert(self, ok: bool) -> None: + return binding._openssl_assert(ok) + + def _enable_fips(self) -> None: + # This function enables FIPS mode for OpenSSL 3.0.0 on installs that + # have the FIPS provider installed properly. + rust_openssl.enable_fips(rust_openssl._providers) + assert rust_openssl.is_fips_enabled() + self._fips_enabled = rust_openssl.is_fips_enabled() + + def openssl_version_text(self) -> str: + """ + Friendly string name of the loaded OpenSSL library. This is not + necessarily the same version as it was compiled against. + + Example: OpenSSL 3.2.1 30 Jan 2024 + """ + return rust_openssl.openssl_version_text() + + def openssl_version_number(self) -> int: + return rust_openssl.openssl_version() + + def _evp_md_from_algorithm(self, algorithm: hashes.HashAlgorithm): + if algorithm.name in ("blake2b", "blake2s"): + alg = f"{algorithm.name}{algorithm.digest_size * 8}".encode( + "ascii" + ) + else: + alg = algorithm.name.encode("ascii") + + evp_md = self._lib.EVP_get_digestbyname(alg) + return evp_md + + def hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + if self._fips_enabled and not isinstance(algorithm, self._fips_hashes): + return False + + evp_md = self._evp_md_from_algorithm(algorithm) + return evp_md != self._ffi.NULL + + def signature_hash_supported( + self, algorithm: hashes.HashAlgorithm + ) -> bool: + # Dedicated check for hashing algorithm use in message digest for + # signatures, e.g. RSA PKCS#1 v1.5 SHA1 (sha1WithRSAEncryption). 
+ if self._fips_enabled and isinstance(algorithm, hashes.SHA1): + return False + return self.hash_supported(algorithm) + + def scrypt_supported(self) -> bool: + if self._fips_enabled: + return False + else: + return hasattr(rust_openssl.kdf, "derive_scrypt") + + def hmac_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + # FIPS mode still allows SHA1 for HMAC + if self._fips_enabled and isinstance(algorithm, hashes.SHA1): + return True + + return self.hash_supported(algorithm) + + def cipher_supported(self, cipher: CipherAlgorithm, mode: Mode) -> bool: + if self._fips_enabled: + # FIPS mode requires AES. TripleDES is disallowed/deprecated in + # FIPS 140-3. + if not isinstance(cipher, self._fips_ciphers): + return False + + return rust_openssl.ciphers.cipher_supported(cipher, mode) + + def pbkdf2_hmac_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + return self.hmac_supported(algorithm) + + def _consume_errors(self) -> list[rust_openssl.OpenSSLError]: + return rust_openssl.capture_error_stack() + + def _oaep_hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + if self._fips_enabled and isinstance(algorithm, hashes.SHA1): + return False + + return isinstance( + algorithm, + ( + hashes.SHA1, + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + ), + ) + + def rsa_padding_supported(self, padding: AsymmetricPadding) -> bool: + if isinstance(padding, PKCS1v15): + return True + elif isinstance(padding, PSS) and isinstance(padding._mgf, MGF1): + # SHA1 is permissible in MGF1 in FIPS even when SHA1 is blocked + # as signature algorithm. + if self._fips_enabled and isinstance( + padding._mgf._algorithm, hashes.SHA1 + ): + return True + else: + return self.hash_supported(padding._mgf._algorithm) + elif isinstance(padding, OAEP) and isinstance(padding._mgf, MGF1): + return self._oaep_hash_supported( + padding._mgf._algorithm + ) and self._oaep_hash_supported(padding._algorithm) + else: + return False + + def rsa_encryption_supported(self, padding: AsymmetricPadding) -> bool: + if self._fips_enabled and isinstance(padding, PKCS1v15): + return False + else: + return self.rsa_padding_supported(padding) + + def dsa_supported(self) -> bool: + return ( + not rust_openssl.CRYPTOGRAPHY_IS_BORINGSSL + and not self._fips_enabled + ) + + def dsa_hash_supported(self, algorithm: hashes.HashAlgorithm) -> bool: + if not self.dsa_supported(): + return False + return self.signature_hash_supported(algorithm) + + def cmac_algorithm_supported(self, algorithm) -> bool: + return self.cipher_supported( + algorithm, CBC(b"\x00" * algorithm.block_size) + ) + + def elliptic_curve_supported(self, curve: ec.EllipticCurve) -> bool: + if self._fips_enabled and not isinstance( + curve, self._fips_ecdh_curves + ): + return False + + return rust_openssl.ec.curve_supported(curve) + + def elliptic_curve_signature_algorithm_supported( + self, + signature_algorithm: ec.EllipticCurveSignatureAlgorithm, + curve: ec.EllipticCurve, + ) -> bool: + # We only support ECDSA right now. 
+ if not isinstance(signature_algorithm, ec.ECDSA): + return False + + return self.elliptic_curve_supported(curve) and ( + isinstance(signature_algorithm.algorithm, asym_utils.Prehashed) + or self.hash_supported(signature_algorithm.algorithm) + ) + + def elliptic_curve_exchange_algorithm_supported( + self, algorithm: ec.ECDH, curve: ec.EllipticCurve + ) -> bool: + return self.elliptic_curve_supported(curve) and isinstance( + algorithm, ec.ECDH + ) + + def dh_supported(self) -> bool: + return not rust_openssl.CRYPTOGRAPHY_IS_BORINGSSL + + def dh_x942_serialization_supported(self) -> bool: + return self._lib.Cryptography_HAS_EVP_PKEY_DHX == 1 + + def x25519_supported(self) -> bool: + if self._fips_enabled: + return False + return True + + def x448_supported(self) -> bool: + if self._fips_enabled: + return False + return ( + not rust_openssl.CRYPTOGRAPHY_IS_LIBRESSL + and not rust_openssl.CRYPTOGRAPHY_IS_BORINGSSL + ) + + def ed25519_supported(self) -> bool: + if self._fips_enabled: + return False + return True + + def ed448_supported(self) -> bool: + if self._fips_enabled: + return False + return ( + not rust_openssl.CRYPTOGRAPHY_IS_LIBRESSL + and not rust_openssl.CRYPTOGRAPHY_IS_BORINGSSL + ) + + def ecdsa_deterministic_supported(self) -> bool: + return ( + rust_openssl.CRYPTOGRAPHY_OPENSSL_320_OR_GREATER + and not self._fips_enabled + ) + + def poly1305_supported(self) -> bool: + if self._fips_enabled: + return False + return True + + def pkcs7_supported(self) -> bool: + return not rust_openssl.CRYPTOGRAPHY_IS_BORINGSSL + + +backend = Backend() diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust.pyd b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust.pyd new file mode 100644 index 00000000..1b2595d2 Binary files /dev/null and b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust.pyd differ diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/__init__.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/__init__.pyi new file mode 100644 index 00000000..c0ea0a54 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/__init__.pyi @@ -0,0 +1,24 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives import padding + +def check_pkcs7_padding(data: bytes) -> bool: ... +def check_ansix923_padding(data: bytes) -> bool: ... + +class PKCS7PaddingContext(padding.PaddingContext): + def __init__(self, block_size: int) -> None: ... + def update(self, data: bytes) -> bytes: ... + def finalize(self) -> bytes: ... + +class ObjectIdentifier: + def __init__(self, val: str) -> None: ... + @property + def dotted_string(self) -> str: ... + @property + def _name(self) -> str: ... 
+ +T = typing.TypeVar("T") diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/_openssl.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/_openssl.pyi new file mode 100644 index 00000000..80100082 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/_openssl.pyi @@ -0,0 +1,8 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +lib = typing.Any +ffi = typing.Any diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/asn1.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/asn1.pyi new file mode 100644 index 00000000..3b5f208e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/asn1.pyi @@ -0,0 +1,7 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +def decode_dss_signature(signature: bytes) -> tuple[int, int]: ... +def encode_dss_signature(r: int, s: int) -> bytes: ... +def parse_spki_for_data(data: bytes) -> bytes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/exceptions.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/exceptions.pyi new file mode 100644 index 00000000..09f46b1e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/exceptions.pyi @@ -0,0 +1,17 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +class _Reasons: + BACKEND_MISSING_INTERFACE: _Reasons + UNSUPPORTED_HASH: _Reasons + UNSUPPORTED_CIPHER: _Reasons + UNSUPPORTED_PADDING: _Reasons + UNSUPPORTED_MGF: _Reasons + UNSUPPORTED_PUBLIC_KEY_ALGORITHM: _Reasons + UNSUPPORTED_ELLIPTIC_CURVE: _Reasons + UNSUPPORTED_SERIALIZATION: _Reasons + UNSUPPORTED_X509: _Reasons + UNSUPPORTED_EXCHANGE_ALGORITHM: _Reasons + UNSUPPORTED_DIFFIE_HELLMAN: _Reasons + UNSUPPORTED_MAC: _Reasons diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/ocsp.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/ocsp.pyi new file mode 100644 index 00000000..5e02145d --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/ocsp.pyi @@ -0,0 +1,23 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives.asymmetric.types import PrivateKeyTypes +from cryptography.x509 import ocsp + +class OCSPRequest: ... +class OCSPResponse: ... +class OCSPSingleResponse: ... + +def load_der_ocsp_request(data: bytes) -> ocsp.OCSPRequest: ... +def load_der_ocsp_response(data: bytes) -> ocsp.OCSPResponse: ... +def create_ocsp_request( + builder: ocsp.OCSPRequestBuilder, +) -> ocsp.OCSPRequest: ... 
+def create_ocsp_response( + status: ocsp.OCSPResponseStatus, + builder: ocsp.OCSPResponseBuilder | None, + private_key: PrivateKeyTypes | None, + hash_algorithm: hashes.HashAlgorithm | None, +) -> ocsp.OCSPResponse: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/__init__.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/__init__.pyi new file mode 100644 index 00000000..1e66d333 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/__init__.pyi @@ -0,0 +1,71 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.bindings._rust.openssl import ( + aead, + ciphers, + cmac, + dh, + dsa, + ec, + ed448, + ed25519, + hashes, + hmac, + kdf, + keys, + poly1305, + rsa, + x448, + x25519, +) + +__all__ = [ + "aead", + "ciphers", + "cmac", + "dh", + "dsa", + "ec", + "ed448", + "ed25519", + "hashes", + "hmac", + "kdf", + "keys", + "openssl_version", + "openssl_version_text", + "poly1305", + "raise_openssl_error", + "rsa", + "x448", + "x25519", +] + +CRYPTOGRAPHY_IS_LIBRESSL: bool +CRYPTOGRAPHY_IS_BORINGSSL: bool +CRYPTOGRAPHY_OPENSSL_300_OR_GREATER: bool +CRYPTOGRAPHY_OPENSSL_320_OR_GREATER: bool + +class Providers: ... + +_legacy_provider_loaded: bool +_providers: Providers + +def openssl_version() -> int: ... +def openssl_version_text() -> str: ... +def raise_openssl_error() -> typing.NoReturn: ... +def capture_error_stack() -> list[OpenSSLError]: ... +def is_fips_enabled() -> bool: ... +def enable_fips(providers: Providers) -> None: ... + +class OpenSSLError: + @property + def lib(self) -> int: ... + @property + def reason(self) -> int: ... + @property + def reason_text(self) -> bytes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/aead.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/aead.pyi new file mode 100644 index 00000000..047f49d8 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/aead.pyi @@ -0,0 +1,103 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +class AESGCM: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_key(key_size: int) -> bytes: ... + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + +class ChaCha20Poly1305: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_key() -> bytes: ... + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + +class AESCCM: + def __init__(self, key: bytes, tag_length: int = 16) -> None: ... + @staticmethod + def generate_key(key_size: int) -> bytes: ... + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... 
+ +class AESSIV: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_key(key_size: int) -> bytes: ... + def encrypt( + self, + data: bytes, + associated_data: list[bytes] | None, + ) -> bytes: ... + def decrypt( + self, + data: bytes, + associated_data: list[bytes] | None, + ) -> bytes: ... + +class AESOCB3: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_key(key_size: int) -> bytes: ... + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + +class AESGCMSIV: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_key(key_size: int) -> bytes: ... + def encrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... + def decrypt( + self, + nonce: bytes, + data: bytes, + associated_data: bytes | None, + ) -> bytes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ciphers.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ciphers.pyi new file mode 100644 index 00000000..759f3b59 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ciphers.pyi @@ -0,0 +1,38 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives import ciphers +from cryptography.hazmat.primitives.ciphers import modes + +@typing.overload +def create_encryption_ctx( + algorithm: ciphers.CipherAlgorithm, mode: modes.ModeWithAuthenticationTag +) -> ciphers.AEADEncryptionContext: ... +@typing.overload +def create_encryption_ctx( + algorithm: ciphers.CipherAlgorithm, mode: modes.Mode +) -> ciphers.CipherContext: ... +@typing.overload +def create_decryption_ctx( + algorithm: ciphers.CipherAlgorithm, mode: modes.ModeWithAuthenticationTag +) -> ciphers.AEADDecryptionContext: ... +@typing.overload +def create_decryption_ctx( + algorithm: ciphers.CipherAlgorithm, mode: modes.Mode +) -> ciphers.CipherContext: ... +def cipher_supported( + algorithm: ciphers.CipherAlgorithm, mode: modes.Mode +) -> bool: ... +def _advance( + ctx: ciphers.AEADEncryptionContext | ciphers.AEADDecryptionContext, n: int +) -> None: ... +def _advance_aad( + ctx: ciphers.AEADEncryptionContext | ciphers.AEADDecryptionContext, n: int +) -> None: ... + +class CipherContext: ... +class AEADEncryptionContext: ... +class AEADDecryptionContext: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/cmac.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/cmac.pyi new file mode 100644 index 00000000..9c03508b --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/cmac.pyi @@ -0,0 +1,18 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives import ciphers + +class CMAC: + def __init__( + self, + algorithm: ciphers.BlockCipherAlgorithm, + backend: typing.Any = None, + ) -> None: ... + def update(self, data: bytes) -> None: ... + def finalize(self) -> bytes: ... 
+ def verify(self, signature: bytes) -> None: ... + def copy(self) -> CMAC: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dh.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dh.pyi new file mode 100644 index 00000000..08733d74 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dh.pyi @@ -0,0 +1,51 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric import dh + +MIN_MODULUS_SIZE: int + +class DHPrivateKey: ... +class DHPublicKey: ... +class DHParameters: ... + +class DHPrivateNumbers: + def __init__(self, x: int, public_numbers: DHPublicNumbers) -> None: ... + def private_key(self, backend: typing.Any = None) -> dh.DHPrivateKey: ... + @property + def x(self) -> int: ... + @property + def public_numbers(self) -> DHPublicNumbers: ... + +class DHPublicNumbers: + def __init__( + self, y: int, parameter_numbers: DHParameterNumbers + ) -> None: ... + def public_key(self, backend: typing.Any = None) -> dh.DHPublicKey: ... + @property + def y(self) -> int: ... + @property + def parameter_numbers(self) -> DHParameterNumbers: ... + +class DHParameterNumbers: + def __init__(self, p: int, g: int, q: int | None = None) -> None: ... + def parameters(self, backend: typing.Any = None) -> dh.DHParameters: ... + @property + def p(self) -> int: ... + @property + def g(self) -> int: ... + @property + def q(self) -> int | None: ... + +def generate_parameters( + generator: int, key_size: int, backend: typing.Any = None +) -> dh.DHParameters: ... +def from_pem_parameters( + data: bytes, backend: typing.Any = None +) -> dh.DHParameters: ... +def from_der_parameters( + data: bytes, backend: typing.Any = None +) -> dh.DHParameters: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dsa.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dsa.pyi new file mode 100644 index 00000000..0922a4c4 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/dsa.pyi @@ -0,0 +1,41 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric import dsa + +class DSAPrivateKey: ... +class DSAPublicKey: ... +class DSAParameters: ... + +class DSAPrivateNumbers: + def __init__(self, x: int, public_numbers: DSAPublicNumbers) -> None: ... + @property + def x(self) -> int: ... + @property + def public_numbers(self) -> DSAPublicNumbers: ... + def private_key(self, backend: typing.Any = None) -> dsa.DSAPrivateKey: ... + +class DSAPublicNumbers: + def __init__( + self, y: int, parameter_numbers: DSAParameterNumbers + ) -> None: ... + @property + def y(self) -> int: ... + @property + def parameter_numbers(self) -> DSAParameterNumbers: ... + def public_key(self, backend: typing.Any = None) -> dsa.DSAPublicKey: ... + +class DSAParameterNumbers: + def __init__(self, p: int, q: int, g: int) -> None: ... + @property + def p(self) -> int: ... + @property + def q(self) -> int: ... + @property + def g(self) -> int: ... 
+ def parameters(self, backend: typing.Any = None) -> dsa.DSAParameters: ... + +def generate_parameters(key_size: int) -> dsa.DSAParameters: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ec.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ec.pyi new file mode 100644 index 00000000..5c3b7bf6 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ec.pyi @@ -0,0 +1,52 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric import ec + +class ECPrivateKey: ... +class ECPublicKey: ... + +class EllipticCurvePrivateNumbers: + def __init__( + self, private_value: int, public_numbers: EllipticCurvePublicNumbers + ) -> None: ... + def private_key( + self, backend: typing.Any = None + ) -> ec.EllipticCurvePrivateKey: ... + @property + def private_value(self) -> int: ... + @property + def public_numbers(self) -> EllipticCurvePublicNumbers: ... + +class EllipticCurvePublicNumbers: + def __init__(self, x: int, y: int, curve: ec.EllipticCurve) -> None: ... + def public_key( + self, backend: typing.Any = None + ) -> ec.EllipticCurvePublicKey: ... + @property + def x(self) -> int: ... + @property + def y(self) -> int: ... + @property + def curve(self) -> ec.EllipticCurve: ... + def __eq__(self, other: object) -> bool: ... + +def curve_supported(curve: ec.EllipticCurve) -> bool: ... +def generate_private_key( + curve: ec.EllipticCurve, backend: typing.Any = None +) -> ec.EllipticCurvePrivateKey: ... +def from_private_numbers( + numbers: ec.EllipticCurvePrivateNumbers, +) -> ec.EllipticCurvePrivateKey: ... +def from_public_numbers( + numbers: ec.EllipticCurvePublicNumbers, +) -> ec.EllipticCurvePublicKey: ... +def from_public_bytes( + curve: ec.EllipticCurve, data: bytes +) -> ec.EllipticCurvePublicKey: ... +def derive_private_key( + private_value: int, curve: ec.EllipticCurve +) -> ec.EllipticCurvePrivateKey: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed25519.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed25519.pyi new file mode 100644 index 00000000..5233f9a1 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed25519.pyi @@ -0,0 +1,12 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.primitives.asymmetric import ed25519 + +class Ed25519PrivateKey: ... +class Ed25519PublicKey: ... + +def generate_key() -> ed25519.Ed25519PrivateKey: ... +def from_private_bytes(data: bytes) -> ed25519.Ed25519PrivateKey: ... +def from_public_bytes(data: bytes) -> ed25519.Ed25519PublicKey: ... 
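# Usage sketch (illustrative only, not part of the vendored stubs): the
# Ed25519 declarations above surface through
# cryptography.hazmat.primitives.asymmetric.ed25519. A minimal sign/verify
# round-trip with that public API:
#
#     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
#         Ed25519PrivateKey,
#     )
#
#     private_key = Ed25519PrivateKey.generate()
#     signature = private_key.sign(b"message")
#     # verify() returns None on success and raises InvalidSignature otherwise
#     private_key.public_key().verify(signature, b"message")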
diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed448.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed448.pyi new file mode 100644 index 00000000..7a065203 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/ed448.pyi @@ -0,0 +1,12 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.primitives.asymmetric import ed448 + +class Ed448PrivateKey: ... +class Ed448PublicKey: ... + +def generate_key() -> ed448.Ed448PrivateKey: ... +def from_private_bytes(data: bytes) -> ed448.Ed448PrivateKey: ... +def from_public_bytes(data: bytes) -> ed448.Ed448PublicKey: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hashes.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hashes.pyi new file mode 100644 index 00000000..ca5f42a0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hashes.pyi @@ -0,0 +1,17 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives import hashes + +class Hash(hashes.HashContext): + def __init__( + self, algorithm: hashes.HashAlgorithm, backend: typing.Any = None + ) -> None: ... + @property + def algorithm(self) -> hashes.HashAlgorithm: ... + def update(self, data: bytes) -> None: ... + def finalize(self) -> bytes: ... + def copy(self) -> Hash: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hmac.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hmac.pyi new file mode 100644 index 00000000..e38d9b54 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/hmac.pyi @@ -0,0 +1,21 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives import hashes + +class HMAC(hashes.HashContext): + def __init__( + self, + key: bytes, + algorithm: hashes.HashAlgorithm, + backend: typing.Any = None, + ) -> None: ... + @property + def algorithm(self) -> hashes.HashAlgorithm: ... + def update(self, data: bytes) -> None: ... + def finalize(self) -> bytes: ... + def verify(self, signature: bytes) -> None: ... + def copy(self) -> HMAC: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/kdf.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/kdf.pyi new file mode 100644 index 00000000..034a8fed --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/kdf.pyi @@ -0,0 +1,22 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from cryptography.hazmat.primitives.hashes import HashAlgorithm + +def derive_pbkdf2_hmac( + key_material: bytes, + algorithm: HashAlgorithm, + salt: bytes, + iterations: int, + length: int, +) -> bytes: ... +def derive_scrypt( + key_material: bytes, + salt: bytes, + n: int, + r: int, + p: int, + max_mem: int, + length: int, +) -> bytes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/keys.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/keys.pyi new file mode 100644 index 00000000..6815b7d9 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/keys.pyi @@ -0,0 +1,33 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric.types import ( + PrivateKeyTypes, + PublicKeyTypes, +) + +def load_der_private_key( + data: bytes, + password: bytes | None, + backend: typing.Any = None, + *, + unsafe_skip_rsa_key_validation: bool = False, +) -> PrivateKeyTypes: ... +def load_pem_private_key( + data: bytes, + password: bytes | None, + backend: typing.Any = None, + *, + unsafe_skip_rsa_key_validation: bool = False, +) -> PrivateKeyTypes: ... +def load_der_public_key( + data: bytes, + backend: typing.Any = None, +) -> PublicKeyTypes: ... +def load_pem_public_key( + data: bytes, + backend: typing.Any = None, +) -> PublicKeyTypes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/poly1305.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/poly1305.pyi new file mode 100644 index 00000000..2e9b0a9e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/poly1305.pyi @@ -0,0 +1,13 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +class Poly1305: + def __init__(self, key: bytes) -> None: ... + @staticmethod + def generate_tag(key: bytes, data: bytes) -> bytes: ... + @staticmethod + def verify_tag(key: bytes, data: bytes, tag: bytes) -> None: ... + def update(self, data: bytes) -> None: ... + def finalize(self) -> bytes: ... + def verify(self, tag: bytes) -> None: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/rsa.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/rsa.pyi new file mode 100644 index 00000000..ef7752dd --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/rsa.pyi @@ -0,0 +1,55 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography.hazmat.primitives.asymmetric import rsa + +class RSAPrivateKey: ... +class RSAPublicKey: ... + +class RSAPrivateNumbers: + def __init__( + self, + p: int, + q: int, + d: int, + dmp1: int, + dmq1: int, + iqmp: int, + public_numbers: RSAPublicNumbers, + ) -> None: ... + @property + def p(self) -> int: ... + @property + def q(self) -> int: ... + @property + def d(self) -> int: ... + @property + def dmp1(self) -> int: ... 
+ @property + def dmq1(self) -> int: ... + @property + def iqmp(self) -> int: ... + @property + def public_numbers(self) -> RSAPublicNumbers: ... + def private_key( + self, + backend: typing.Any = None, + *, + unsafe_skip_rsa_key_validation: bool = False, + ) -> rsa.RSAPrivateKey: ... + +class RSAPublicNumbers: + def __init__(self, e: int, n: int) -> None: ... + @property + def n(self) -> int: ... + @property + def e(self) -> int: ... + def public_key(self, backend: typing.Any = None) -> rsa.RSAPublicKey: ... + +def generate_private_key( + public_exponent: int, + key_size: int, +) -> rsa.RSAPrivateKey: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x25519.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x25519.pyi new file mode 100644 index 00000000..da0f3ec5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x25519.pyi @@ -0,0 +1,12 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.primitives.asymmetric import x25519 + +class X25519PrivateKey: ... +class X25519PublicKey: ... + +def generate_key() -> x25519.X25519PrivateKey: ... +def from_private_bytes(data: bytes) -> x25519.X25519PrivateKey: ... +def from_public_bytes(data: bytes) -> x25519.X25519PublicKey: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x448.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x448.pyi new file mode 100644 index 00000000..e51cfebe --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/openssl/x448.pyi @@ -0,0 +1,12 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.primitives.asymmetric import x448 + +class X448PrivateKey: ... +class X448PublicKey: ... + +def generate_key() -> x448.X448PrivateKey: ... +def from_private_bytes(data: bytes) -> x448.X448PrivateKey: ... +def from_public_bytes(data: bytes) -> x448.X448PublicKey: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs12.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs12.pyi new file mode 100644 index 00000000..40514c46 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs12.pyi @@ -0,0 +1,46 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography import x509 +from cryptography.hazmat.primitives.asymmetric.types import PrivateKeyTypes +from cryptography.hazmat.primitives.serialization import ( + KeySerializationEncryption, +) +from cryptography.hazmat.primitives.serialization.pkcs12 import ( + PKCS12KeyAndCertificates, + PKCS12PrivateKeyTypes, +) + +class PKCS12Certificate: + def __init__( + self, cert: x509.Certificate, friendly_name: bytes | None + ) -> None: ... + @property + def friendly_name(self) -> bytes | None: ... + @property + def certificate(self) -> x509.Certificate: ... 
+ +def load_key_and_certificates( + data: bytes, + password: bytes | None, + backend: typing.Any = None, +) -> tuple[ + PrivateKeyTypes | None, + x509.Certificate | None, + list[x509.Certificate], +]: ... +def load_pkcs12( + data: bytes, + password: bytes | None, + backend: typing.Any = None, +) -> PKCS12KeyAndCertificates: ... +def serialize_key_and_certificates( + name: bytes | None, + key: PKCS12PrivateKeyTypes | None, + cert: x509.Certificate | None, + cas: typing.Iterable[x509.Certificate | PKCS12Certificate] | None, + encryption_algorithm: KeySerializationEncryption, +) -> bytes: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs7.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs7.pyi new file mode 100644 index 00000000..a72120a7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/pkcs7.pyi @@ -0,0 +1,30 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import typing + +from cryptography import x509 +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.serialization import pkcs7 + +def serialize_certificates( + certs: list[x509.Certificate], + encoding: serialization.Encoding, +) -> bytes: ... +def encrypt_and_serialize( + builder: pkcs7.PKCS7EnvelopeBuilder, + encoding: serialization.Encoding, + options: typing.Iterable[pkcs7.PKCS7Options], +) -> bytes: ... +def sign_and_serialize( + builder: pkcs7.PKCS7SignatureBuilder, + encoding: serialization.Encoding, + options: typing.Iterable[pkcs7.PKCS7Options], +) -> bytes: ... +def load_pem_pkcs7_certificates( + data: bytes, +) -> list[x509.Certificate]: ... +def load_der_pkcs7_certificates( + data: bytes, +) -> list[x509.Certificate]: ... diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/test_support.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/test_support.pyi new file mode 100644 index 00000000..a53ee25d --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/test_support.pyi @@ -0,0 +1,29 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography import x509 +from cryptography.hazmat.primitives import serialization +from cryptography.hazmat.primitives.serialization import pkcs7 + +class TestCertificate: + not_after_tag: int + not_before_tag: int + issuer_value_tags: list[int] + subject_value_tags: list[int] + +def test_parse_certificate(data: bytes) -> TestCertificate: ... +def pkcs7_decrypt( + encoding: serialization.Encoding, + msg: bytes, + pkey: serialization.pkcs7.PKCS7PrivateKeyTypes, + cert_recipient: x509.Certificate, + options: list[pkcs7.PKCS7Options], +) -> bytes: ... +def pkcs7_verify( + encoding: serialization.Encoding, + sig: bytes, + msg: bytes | None, + certs: list[x509.Certificate], + options: list[pkcs7.PKCS7Options], +) -> None: ... 
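# Usage sketch (illustrative only, not part of the vendored stubs): the
# PKCS#12 declarations above surface through
# cryptography.hazmat.primitives.serialization.pkcs12. A minimal loading
# sketch, where p12_bytes and the password are placeholders:
#
#     from cryptography.hazmat.primitives.serialization.pkcs12 import (
#         load_key_and_certificates,
#     )
#
#     private_key, cert, additional_certs = load_key_and_certificates(
#         p12_bytes, password=b"secret"
#     )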
diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/x509.pyi b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/x509.pyi new file mode 100644 index 00000000..aa85657f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/_rust/x509.pyi @@ -0,0 +1,108 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +import datetime +import typing + +from cryptography import x509 +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives.asymmetric.padding import PSS, PKCS1v15 +from cryptography.hazmat.primitives.asymmetric.types import PrivateKeyTypes + +def load_pem_x509_certificate( + data: bytes, backend: typing.Any = None +) -> x509.Certificate: ... +def load_der_x509_certificate( + data: bytes, backend: typing.Any = None +) -> x509.Certificate: ... +def load_pem_x509_certificates( + data: bytes, +) -> list[x509.Certificate]: ... +def load_pem_x509_crl( + data: bytes, backend: typing.Any = None +) -> x509.CertificateRevocationList: ... +def load_der_x509_crl( + data: bytes, backend: typing.Any = None +) -> x509.CertificateRevocationList: ... +def load_pem_x509_csr( + data: bytes, backend: typing.Any = None +) -> x509.CertificateSigningRequest: ... +def load_der_x509_csr( + data: bytes, backend: typing.Any = None +) -> x509.CertificateSigningRequest: ... +def encode_name_bytes(name: x509.Name) -> bytes: ... +def encode_extension_value(extension: x509.ExtensionType) -> bytes: ... +def create_x509_certificate( + builder: x509.CertificateBuilder, + private_key: PrivateKeyTypes, + hash_algorithm: hashes.HashAlgorithm | None, + rsa_padding: PKCS1v15 | PSS | None, +) -> x509.Certificate: ... +def create_x509_csr( + builder: x509.CertificateSigningRequestBuilder, + private_key: PrivateKeyTypes, + hash_algorithm: hashes.HashAlgorithm | None, + rsa_padding: PKCS1v15 | PSS | None, +) -> x509.CertificateSigningRequest: ... +def create_x509_crl( + builder: x509.CertificateRevocationListBuilder, + private_key: PrivateKeyTypes, + hash_algorithm: hashes.HashAlgorithm | None, + rsa_padding: PKCS1v15 | PSS | None, +) -> x509.CertificateRevocationList: ... + +class Sct: ... +class Certificate: ... +class RevokedCertificate: ... +class CertificateRevocationList: ... +class CertificateSigningRequest: ... + +class PolicyBuilder: + def time(self, new_time: datetime.datetime) -> PolicyBuilder: ... + def store(self, new_store: Store) -> PolicyBuilder: ... + def max_chain_depth(self, new_max_chain_depth: int) -> PolicyBuilder: ... + def build_client_verifier(self) -> ClientVerifier: ... + def build_server_verifier( + self, subject: x509.verification.Subject + ) -> ServerVerifier: ... + +class VerifiedClient: + @property + def subjects(self) -> list[x509.GeneralName]: ... + @property + def chain(self) -> list[x509.Certificate]: ... + +class ClientVerifier: + @property + def validation_time(self) -> datetime.datetime: ... + @property + def store(self) -> Store: ... + @property + def max_chain_depth(self) -> int: ... + def verify( + self, + leaf: x509.Certificate, + intermediates: list[x509.Certificate], + ) -> VerifiedClient: ... + +class ServerVerifier: + @property + def subject(self) -> x509.verification.Subject: ... + @property + def validation_time(self) -> datetime.datetime: ... + @property + def store(self) -> Store: ... 
+ @property + def max_chain_depth(self) -> int: ... + def verify( + self, + leaf: x509.Certificate, + intermediates: list[x509.Certificate], + ) -> list[x509.Certificate]: ... + +class Store: + def __init__(self, certs: list[x509.Certificate]) -> None: ... + +class VerificationError(Exception): + pass diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/_conditional.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/_conditional.py new file mode 100644 index 00000000..73c06f7d --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/_conditional.py @@ -0,0 +1,183 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + + +def cryptography_has_set_cert_cb() -> list[str]: + return [ + "SSL_CTX_set_cert_cb", + "SSL_set_cert_cb", + ] + + +def cryptography_has_ssl_st() -> list[str]: + return [ + "SSL_ST_BEFORE", + "SSL_ST_OK", + "SSL_ST_INIT", + "SSL_ST_RENEGOTIATE", + ] + + +def cryptography_has_tls_st() -> list[str]: + return [ + "TLS_ST_BEFORE", + "TLS_ST_OK", + ] + + +def cryptography_has_ssl_sigalgs() -> list[str]: + return [ + "SSL_CTX_set1_sigalgs_list", + ] + + +def cryptography_has_psk() -> list[str]: + return [ + "SSL_CTX_use_psk_identity_hint", + "SSL_CTX_set_psk_server_callback", + "SSL_CTX_set_psk_client_callback", + ] + + +def cryptography_has_psk_tlsv13() -> list[str]: + return [ + "SSL_CTX_set_psk_find_session_callback", + "SSL_CTX_set_psk_use_session_callback", + "Cryptography_SSL_SESSION_new", + "SSL_CIPHER_find", + "SSL_SESSION_set1_master_key", + "SSL_SESSION_set_cipher", + "SSL_SESSION_set_protocol_version", + ] + + +def cryptography_has_custom_ext() -> list[str]: + return [ + "SSL_CTX_add_client_custom_ext", + "SSL_CTX_add_server_custom_ext", + "SSL_extension_supported", + ] + + +def cryptography_has_tlsv13_functions() -> list[str]: + return [ + "SSL_VERIFY_POST_HANDSHAKE", + "SSL_CTX_set_ciphersuites", + "SSL_verify_client_post_handshake", + "SSL_CTX_set_post_handshake_auth", + "SSL_set_post_handshake_auth", + "SSL_SESSION_get_max_early_data", + "SSL_write_early_data", + "SSL_read_early_data", + "SSL_CTX_set_max_early_data", + ] + + +def cryptography_has_engine() -> list[str]: + return [ + "ENGINE_by_id", + "ENGINE_init", + "ENGINE_finish", + "ENGINE_get_default_RAND", + "ENGINE_set_default_RAND", + "ENGINE_unregister_RAND", + "ENGINE_ctrl_cmd", + "ENGINE_free", + "ENGINE_get_name", + "ENGINE_ctrl_cmd_string", + "ENGINE_load_builtin_engines", + "ENGINE_load_private_key", + "ENGINE_load_public_key", + "SSL_CTX_set_client_cert_engine", + ] + + +def cryptography_has_verified_chain() -> list[str]: + return [ + "SSL_get0_verified_chain", + ] + + +def cryptography_has_srtp() -> list[str]: + return [ + "SSL_CTX_set_tlsext_use_srtp", + 
"SSL_set_tlsext_use_srtp", + "SSL_get_selected_srtp_profile", + ] + + +def cryptography_has_op_no_renegotiation() -> list[str]: + return [ + "SSL_OP_NO_RENEGOTIATION", + ] + + +def cryptography_has_dtls_get_data_mtu() -> list[str]: + return [ + "DTLS_get_data_mtu", + ] + + +def cryptography_has_ssl_cookie() -> list[str]: + return [ + "SSL_OP_COOKIE_EXCHANGE", + "DTLSv1_listen", + "SSL_CTX_set_cookie_generate_cb", + "SSL_CTX_set_cookie_verify_cb", + ] + + +def cryptography_has_prime_checks() -> list[str]: + return [ + "BN_prime_checks_for_size", + ] + + +def cryptography_has_unexpected_eof_while_reading() -> list[str]: + return ["SSL_R_UNEXPECTED_EOF_WHILE_READING"] + + +def cryptography_has_ssl_op_ignore_unexpected_eof() -> list[str]: + return [ + "SSL_OP_IGNORE_UNEXPECTED_EOF", + ] + + +def cryptography_has_get_extms_support() -> list[str]: + return ["SSL_get_extms_support"] + + +# This is a mapping of +# {condition: function-returning-names-dependent-on-that-condition} so we can +# loop over them and delete unsupported names at runtime. It will be removed +# when cffi supports #if in cdef. We use functions instead of just a dict of +# lists so we can use coverage to measure which are used. +CONDITIONAL_NAMES = { + "Cryptography_HAS_SET_CERT_CB": cryptography_has_set_cert_cb, + "Cryptography_HAS_SSL_ST": cryptography_has_ssl_st, + "Cryptography_HAS_TLS_ST": cryptography_has_tls_st, + "Cryptography_HAS_SIGALGS": cryptography_has_ssl_sigalgs, + "Cryptography_HAS_PSK": cryptography_has_psk, + "Cryptography_HAS_PSK_TLSv1_3": cryptography_has_psk_tlsv13, + "Cryptography_HAS_CUSTOM_EXT": cryptography_has_custom_ext, + "Cryptography_HAS_TLSv1_3_FUNCTIONS": cryptography_has_tlsv13_functions, + "Cryptography_HAS_ENGINE": cryptography_has_engine, + "Cryptography_HAS_VERIFIED_CHAIN": cryptography_has_verified_chain, + "Cryptography_HAS_SRTP": cryptography_has_srtp, + "Cryptography_HAS_OP_NO_RENEGOTIATION": ( + cryptography_has_op_no_renegotiation + ), + "Cryptography_HAS_DTLS_GET_DATA_MTU": cryptography_has_dtls_get_data_mtu, + "Cryptography_HAS_SSL_COOKIE": cryptography_has_ssl_cookie, + "Cryptography_HAS_PRIME_CHECKS": cryptography_has_prime_checks, + "Cryptography_HAS_UNEXPECTED_EOF_WHILE_READING": ( + cryptography_has_unexpected_eof_while_reading + ), + "Cryptography_HAS_SSL_OP_IGNORE_UNEXPECTED_EOF": ( + cryptography_has_ssl_op_ignore_unexpected_eof + ), + "Cryptography_HAS_GET_EXTMS_SUPPORT": cryptography_has_get_extms_support, +} diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/binding.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/binding.py new file mode 100644 index 00000000..d4dfeef4 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/bindings/openssl/binding.py @@ -0,0 +1,121 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import os +import sys +import threading +import types +import typing +import warnings + +import cryptography +from cryptography.exceptions import InternalError +from cryptography.hazmat.bindings._rust import _openssl, openssl +from cryptography.hazmat.bindings.openssl._conditional import CONDITIONAL_NAMES + + +def _openssl_assert(ok: bool) -> None: + if not ok: + errors = openssl.capture_error_stack() + + raise InternalError( + "Unknown OpenSSL error. 
This error is commonly encountered when " + "another library is not cleaning up the OpenSSL error stack. If " + "you are using cryptography with another library that uses " + "OpenSSL try disabling it before reporting a bug. Otherwise " + "please file an issue at https://github.com/pyca/cryptography/" + "issues with information on how to reproduce " + f"this. ({errors!r})", + errors, + ) + + +def build_conditional_library( + lib: typing.Any, + conditional_names: dict[str, typing.Callable[[], list[str]]], +) -> typing.Any: + conditional_lib = types.ModuleType("lib") + conditional_lib._original_lib = lib # type: ignore[attr-defined] + excluded_names = set() + for condition, names_cb in conditional_names.items(): + if not getattr(lib, condition): + excluded_names.update(names_cb()) + + for attr in dir(lib): + if attr not in excluded_names: + setattr(conditional_lib, attr, getattr(lib, attr)) + + return conditional_lib + + +class Binding: + """ + OpenSSL API wrapper. + """ + + lib: typing.ClassVar = None + ffi = _openssl.ffi + _lib_loaded = False + _init_lock = threading.Lock() + + def __init__(self) -> None: + self._ensure_ffi_initialized() + + @classmethod + def _ensure_ffi_initialized(cls) -> None: + with cls._init_lock: + if not cls._lib_loaded: + cls.lib = build_conditional_library( + _openssl.lib, CONDITIONAL_NAMES + ) + cls._lib_loaded = True + + @classmethod + def init_static_locks(cls) -> None: + cls._ensure_ffi_initialized() + + +def _verify_package_version(version: str) -> None: + # Occasionally we run into situations where the version of the Python + # package does not match the version of the shared object that is loaded. + # This may occur in environments where multiple versions of cryptography + # are installed and available in the python path. To avoid errors cropping + # up later this code checks that the currently imported package and the + # shared object that were loaded have the same version and raise an + # ImportError if they do not + so_package_version = _openssl.ffi.string( + _openssl.lib.CRYPTOGRAPHY_PACKAGE_VERSION + ) + if version.encode("ascii") != so_package_version: + raise ImportError( + "The version of cryptography does not match the loaded " + "shared object. This can happen if you have multiple copies of " + "cryptography installed in your Python path. Please try creating " + "a new virtual environment to resolve this issue. " + f"Loaded python version: {version}, " + f"shared object version: {so_package_version}" + ) + + _openssl_assert( + _openssl.lib.OpenSSL_version_num() == openssl.openssl_version(), + ) + + +_verify_package_version(cryptography.__version__) + +Binding.init_static_locks() + +if ( + sys.platform == "win32" + and os.environ.get("PROCESSOR_ARCHITEW6432") is not None +): + warnings.warn( + "You are using cryptography on a 32-bit Python on a 64-bit Windows " + "Operating System. Cryptography will be significantly faster if you " + "switch to using a 64-bit Python.", + UserWarning, + stacklevel=2, + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/__init__.py new file mode 100644 index 00000000..41d73186 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/__init__.py @@ -0,0 +1,5 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/__init__.py new file mode 100644 index 00000000..41d73186 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/__init__.py @@ -0,0 +1,5 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/algorithms.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/algorithms.py new file mode 100644 index 00000000..a7d4aa3c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/decrepit/ciphers/algorithms.py @@ -0,0 +1,107 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.primitives._cipheralgorithm import ( + BlockCipherAlgorithm, + CipherAlgorithm, + _verify_key_size, +) + + +class ARC4(CipherAlgorithm): + name = "RC4" + key_sizes = frozenset([40, 56, 64, 80, 128, 160, 192, 256]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class TripleDES(BlockCipherAlgorithm): + name = "3DES" + block_size = 64 + key_sizes = frozenset([64, 128, 192]) + + def __init__(self, key: bytes): + if len(key) == 8: + key += key + key + elif len(key) == 16: + key += key[:8] + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class Blowfish(BlockCipherAlgorithm): + name = "Blowfish" + block_size = 64 + key_sizes = frozenset(range(32, 449, 8)) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class CAST5(BlockCipherAlgorithm): + name = "CAST5" + block_size = 64 + key_sizes = frozenset(range(40, 129, 8)) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class SEED(BlockCipherAlgorithm): + name = "SEED" + block_size = 128 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class IDEA(BlockCipherAlgorithm): + name = "IDEA" + block_size = 64 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +# This class only allows RC2 with a 128-bit key. No support for +# effective key bits or other key sizes is provided. 
+class RC2(BlockCipherAlgorithm): + name = "RC2" + block_size = 64 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_asymmetric.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_asymmetric.py new file mode 100644 index 00000000..ea55ffdf --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_asymmetric.py @@ -0,0 +1,19 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +# This exists to break an import cycle. It is normally accessible from the +# asymmetric padding module. + + +class AsymmetricPadding(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def name(self) -> str: + """ + A string naming this padding (e.g. "PSS", "PKCS1"). + """ diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_cipheralgorithm.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_cipheralgorithm.py new file mode 100644 index 00000000..588a6169 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_cipheralgorithm.py @@ -0,0 +1,58 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography import utils + +# This exists to break an import cycle. It is normally accessible from the +# ciphers module. + + +class CipherAlgorithm(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def name(self) -> str: + """ + A string naming this mode (e.g. "AES", "Camellia"). + """ + + @property + @abc.abstractmethod + def key_sizes(self) -> frozenset[int]: + """ + Valid key sizes for this algorithm in bits + """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The size of the key being used as an integer in bits (e.g. 128, 256). + """ + + +class BlockCipherAlgorithm(CipherAlgorithm): + key: bytes + + @property + @abc.abstractmethod + def block_size(self) -> int: + """ + The size of a block as an integer in bits (e.g. 64, 128). + """ + + +def _verify_key_size(algorithm: CipherAlgorithm, key: bytes) -> bytes: + # Verify that the key is instance of bytes + utils._check_byteslike("key", key) + + # Verify that the key size matches the expected key size + if len(key) * 8 not in algorithm.key_sizes: + raise ValueError( + f"Invalid key size ({len(key) * 8}) for {algorithm.name}." 
+ ) + return key diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_serialization.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_serialization.py new file mode 100644 index 00000000..46157721 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/_serialization.py @@ -0,0 +1,169 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography import utils +from cryptography.hazmat.primitives.hashes import HashAlgorithm + +# This exists to break an import cycle. These classes are normally accessible +# from the serialization module. + + +class PBES(utils.Enum): + PBESv1SHA1And3KeyTripleDESCBC = "PBESv1 using SHA1 and 3-Key TripleDES" + PBESv2SHA256AndAES256CBC = "PBESv2 using SHA256 PBKDF2 and AES256 CBC" + + +class Encoding(utils.Enum): + PEM = "PEM" + DER = "DER" + OpenSSH = "OpenSSH" + Raw = "Raw" + X962 = "ANSI X9.62" + SMIME = "S/MIME" + + +class PrivateFormat(utils.Enum): + PKCS8 = "PKCS8" + TraditionalOpenSSL = "TraditionalOpenSSL" + Raw = "Raw" + OpenSSH = "OpenSSH" + PKCS12 = "PKCS12" + + def encryption_builder(self) -> KeySerializationEncryptionBuilder: + if self not in (PrivateFormat.OpenSSH, PrivateFormat.PKCS12): + raise ValueError( + "encryption_builder only supported with PrivateFormat.OpenSSH" + " and PrivateFormat.PKCS12" + ) + return KeySerializationEncryptionBuilder(self) + + +class PublicFormat(utils.Enum): + SubjectPublicKeyInfo = "X.509 subjectPublicKeyInfo with PKCS#1" + PKCS1 = "Raw PKCS#1" + OpenSSH = "OpenSSH" + Raw = "Raw" + CompressedPoint = "X9.62 Compressed Point" + UncompressedPoint = "X9.62 Uncompressed Point" + + +class ParameterFormat(utils.Enum): + PKCS3 = "PKCS3" + + +class KeySerializationEncryption(metaclass=abc.ABCMeta): + pass + + +class BestAvailableEncryption(KeySerializationEncryption): + def __init__(self, password: bytes): + if not isinstance(password, bytes) or len(password) == 0: + raise ValueError("Password must be 1 or more bytes.") + + self.password = password + + +class NoEncryption(KeySerializationEncryption): + pass + + +class KeySerializationEncryptionBuilder: + def __init__( + self, + format: PrivateFormat, + *, + _kdf_rounds: int | None = None, + _hmac_hash: HashAlgorithm | None = None, + _key_cert_algorithm: PBES | None = None, + ) -> None: + self._format = format + + self._kdf_rounds = _kdf_rounds + self._hmac_hash = _hmac_hash + self._key_cert_algorithm = _key_cert_algorithm + + def kdf_rounds(self, rounds: int) -> KeySerializationEncryptionBuilder: + if self._kdf_rounds is not None: + raise ValueError("kdf_rounds already set") + + if not isinstance(rounds, int): + raise TypeError("kdf_rounds must be an integer") + + if rounds < 1: + raise ValueError("kdf_rounds must be a positive integer") + + return KeySerializationEncryptionBuilder( + self._format, + _kdf_rounds=rounds, + _hmac_hash=self._hmac_hash, + _key_cert_algorithm=self._key_cert_algorithm, + ) + + def hmac_hash( + self, algorithm: HashAlgorithm + ) -> KeySerializationEncryptionBuilder: + if self._format is not PrivateFormat.PKCS12: + raise TypeError( + "hmac_hash only supported with PrivateFormat.PKCS12" + ) + + if self._hmac_hash is not None: + raise ValueError("hmac_hash already set") + return KeySerializationEncryptionBuilder( + self._format, + 
_kdf_rounds=self._kdf_rounds, + _hmac_hash=algorithm, + _key_cert_algorithm=self._key_cert_algorithm, + ) + + def key_cert_algorithm( + self, algorithm: PBES + ) -> KeySerializationEncryptionBuilder: + if self._format is not PrivateFormat.PKCS12: + raise TypeError( + "key_cert_algorithm only supported with " + "PrivateFormat.PKCS12" + ) + if self._key_cert_algorithm is not None: + raise ValueError("key_cert_algorithm already set") + return KeySerializationEncryptionBuilder( + self._format, + _kdf_rounds=self._kdf_rounds, + _hmac_hash=self._hmac_hash, + _key_cert_algorithm=algorithm, + ) + + def build(self, password: bytes) -> KeySerializationEncryption: + if not isinstance(password, bytes) or len(password) == 0: + raise ValueError("Password must be 1 or more bytes.") + + return _KeySerializationEncryption( + self._format, + password, + kdf_rounds=self._kdf_rounds, + hmac_hash=self._hmac_hash, + key_cert_algorithm=self._key_cert_algorithm, + ) + + +class _KeySerializationEncryption(KeySerializationEncryption): + def __init__( + self, + format: PrivateFormat, + password: bytes, + *, + kdf_rounds: int | None, + hmac_hash: HashAlgorithm | None, + key_cert_algorithm: PBES | None, + ): + self._format = format + self.password = password + + self._kdf_rounds = kdf_rounds + self._hmac_hash = hmac_hash + self._key_cert_algorithm = key_cert_algorithm diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/__init__.py new file mode 100644 index 00000000..b5093362 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/__init__.py @@ -0,0 +1,3 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dh.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dh.py new file mode 100644 index 00000000..31c9748a --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dh.py @@ -0,0 +1,135 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization + +generate_parameters = rust_openssl.dh.generate_parameters + + +DHPrivateNumbers = rust_openssl.dh.DHPrivateNumbers +DHPublicNumbers = rust_openssl.dh.DHPublicNumbers +DHParameterNumbers = rust_openssl.dh.DHParameterNumbers + + +class DHParameters(metaclass=abc.ABCMeta): + @abc.abstractmethod + def generate_private_key(self) -> DHPrivateKey: + """ + Generates and returns a DHPrivateKey. + """ + + @abc.abstractmethod + def parameter_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.ParameterFormat, + ) -> bytes: + """ + Returns the parameters serialized as bytes. + """ + + @abc.abstractmethod + def parameter_numbers(self) -> DHParameterNumbers: + """ + Returns a DHParameterNumbers. 
+ """ + + +DHParametersWithSerialization = DHParameters +DHParameters.register(rust_openssl.dh.DHParameters) + + +class DHPublicKey(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def parameters(self) -> DHParameters: + """ + The DHParameters object associated with this public key. + """ + + @abc.abstractmethod + def public_numbers(self) -> DHPublicNumbers: + """ + Returns a DHPublicNumbers. + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +DHPublicKeyWithSerialization = DHPublicKey +DHPublicKey.register(rust_openssl.dh.DHPublicKey) + + +class DHPrivateKey(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def public_key(self) -> DHPublicKey: + """ + The DHPublicKey associated with this private key. + """ + + @abc.abstractmethod + def parameters(self) -> DHParameters: + """ + The DHParameters object associated with this private key. + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: DHPublicKey) -> bytes: + """ + Given peer's DHPublicKey, carry out the key exchange and + return shared key as bytes. + """ + + @abc.abstractmethod + def private_numbers(self) -> DHPrivateNumbers: + """ + Returns a DHPrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +DHPrivateKeyWithSerialization = DHPrivateKey +DHPrivateKey.register(rust_openssl.dh.DHPrivateKey) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dsa.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dsa.py new file mode 100644 index 00000000..6dd34c0e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/dsa.py @@ -0,0 +1,154 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import typing + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives.asymmetric import utils as asym_utils + + +class DSAParameters(metaclass=abc.ABCMeta): + @abc.abstractmethod + def generate_private_key(self) -> DSAPrivateKey: + """ + Generates and returns a DSAPrivateKey. + """ + + @abc.abstractmethod + def parameter_numbers(self) -> DSAParameterNumbers: + """ + Returns a DSAParameterNumbers. + """ + + +DSAParametersWithNumbers = DSAParameters +DSAParameters.register(rust_openssl.dsa.DSAParameters) + + +class DSAPrivateKey(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def public_key(self) -> DSAPublicKey: + """ + The DSAPublicKey associated with this private key. 
+ """ + + @abc.abstractmethod + def parameters(self) -> DSAParameters: + """ + The DSAParameters object associated with this private key. + """ + + @abc.abstractmethod + def sign( + self, + data: bytes, + algorithm: asym_utils.Prehashed | hashes.HashAlgorithm, + ) -> bytes: + """ + Signs the data + """ + + @abc.abstractmethod + def private_numbers(self) -> DSAPrivateNumbers: + """ + Returns a DSAPrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +DSAPrivateKeyWithSerialization = DSAPrivateKey +DSAPrivateKey.register(rust_openssl.dsa.DSAPrivateKey) + + +class DSAPublicKey(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the prime modulus. + """ + + @abc.abstractmethod + def parameters(self) -> DSAParameters: + """ + The DSAParameters object associated with this public key. + """ + + @abc.abstractmethod + def public_numbers(self) -> DSAPublicNumbers: + """ + Returns a DSAPublicNumbers. + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def verify( + self, + signature: bytes, + data: bytes, + algorithm: asym_utils.Prehashed | hashes.HashAlgorithm, + ) -> None: + """ + Verifies the signature of the data. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +DSAPublicKeyWithSerialization = DSAPublicKey +DSAPublicKey.register(rust_openssl.dsa.DSAPublicKey) + +DSAPrivateNumbers = rust_openssl.dsa.DSAPrivateNumbers +DSAPublicNumbers = rust_openssl.dsa.DSAPublicNumbers +DSAParameterNumbers = rust_openssl.dsa.DSAParameterNumbers + + +def generate_parameters( + key_size: int, backend: typing.Any = None +) -> DSAParameters: + if key_size not in (1024, 2048, 3072, 4096): + raise ValueError("Key size must be 1024, 2048, 3072, or 4096 bits.") + + return rust_openssl.dsa.generate_parameters(key_size) + + +def generate_private_key( + key_size: int, backend: typing.Any = None +) -> DSAPrivateKey: + parameters = generate_parameters(key_size) + return parameters.generate_private_key() diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ec.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ec.py new file mode 100644 index 00000000..da1fbea1 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ec.py @@ -0,0 +1,403 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +import abc +import typing + +from cryptography import utils +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat._oid import ObjectIdentifier +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives.asymmetric import utils as asym_utils + + +class EllipticCurveOID: + SECP192R1 = ObjectIdentifier("1.2.840.10045.3.1.1") + SECP224R1 = ObjectIdentifier("1.3.132.0.33") + SECP256K1 = ObjectIdentifier("1.3.132.0.10") + SECP256R1 = ObjectIdentifier("1.2.840.10045.3.1.7") + SECP384R1 = ObjectIdentifier("1.3.132.0.34") + SECP521R1 = ObjectIdentifier("1.3.132.0.35") + BRAINPOOLP256R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.7") + BRAINPOOLP384R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.11") + BRAINPOOLP512R1 = ObjectIdentifier("1.3.36.3.3.2.8.1.1.13") + SECT163K1 = ObjectIdentifier("1.3.132.0.1") + SECT163R2 = ObjectIdentifier("1.3.132.0.15") + SECT233K1 = ObjectIdentifier("1.3.132.0.26") + SECT233R1 = ObjectIdentifier("1.3.132.0.27") + SECT283K1 = ObjectIdentifier("1.3.132.0.16") + SECT283R1 = ObjectIdentifier("1.3.132.0.17") + SECT409K1 = ObjectIdentifier("1.3.132.0.36") + SECT409R1 = ObjectIdentifier("1.3.132.0.37") + SECT571K1 = ObjectIdentifier("1.3.132.0.38") + SECT571R1 = ObjectIdentifier("1.3.132.0.39") + + +class EllipticCurve(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def name(self) -> str: + """ + The name of the curve. e.g. secp256r1. + """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + +class EllipticCurveSignatureAlgorithm(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def algorithm( + self, + ) -> asym_utils.Prehashed | hashes.HashAlgorithm: + """ + The digest algorithm used with this signature. + """ + + +class EllipticCurvePrivateKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def exchange( + self, algorithm: ECDH, peer_public_key: EllipticCurvePublicKey + ) -> bytes: + """ + Performs a key exchange operation using the provided algorithm with the + provided peer's public key. + """ + + @abc.abstractmethod + def public_key(self) -> EllipticCurvePublicKey: + """ + The EllipticCurvePublicKey for this private key. + """ + + @property + @abc.abstractmethod + def curve(self) -> EllipticCurve: + """ + The EllipticCurve that this key is on. + """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + @abc.abstractmethod + def sign( + self, + data: bytes, + signature_algorithm: EllipticCurveSignatureAlgorithm, + ) -> bytes: + """ + Signs the data + """ + + @abc.abstractmethod + def private_numbers(self) -> EllipticCurvePrivateNumbers: + """ + Returns an EllipticCurvePrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +EllipticCurvePrivateKeyWithSerialization = EllipticCurvePrivateKey +EllipticCurvePrivateKey.register(rust_openssl.ec.ECPrivateKey) + + +class EllipticCurvePublicKey(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def curve(self) -> EllipticCurve: + """ + The EllipticCurve that this key is on. 
+ """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + Bit size of a secret scalar for the curve. + """ + + @abc.abstractmethod + def public_numbers(self) -> EllipticCurvePublicNumbers: + """ + Returns an EllipticCurvePublicNumbers. + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def verify( + self, + signature: bytes, + data: bytes, + signature_algorithm: EllipticCurveSignatureAlgorithm, + ) -> None: + """ + Verifies the signature of the data. + """ + + @classmethod + def from_encoded_point( + cls, curve: EllipticCurve, data: bytes + ) -> EllipticCurvePublicKey: + utils._check_bytes("data", data) + + if len(data) == 0: + raise ValueError("data must not be an empty byte string") + + if data[0] not in [0x02, 0x03, 0x04]: + raise ValueError("Unsupported elliptic curve point type") + + return rust_openssl.ec.from_public_bytes(curve, data) + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +EllipticCurvePublicKeyWithSerialization = EllipticCurvePublicKey +EllipticCurvePublicKey.register(rust_openssl.ec.ECPublicKey) + +EllipticCurvePrivateNumbers = rust_openssl.ec.EllipticCurvePrivateNumbers +EllipticCurvePublicNumbers = rust_openssl.ec.EllipticCurvePublicNumbers + + +class SECT571R1(EllipticCurve): + name = "sect571r1" + key_size = 570 + + +class SECT409R1(EllipticCurve): + name = "sect409r1" + key_size = 409 + + +class SECT283R1(EllipticCurve): + name = "sect283r1" + key_size = 283 + + +class SECT233R1(EllipticCurve): + name = "sect233r1" + key_size = 233 + + +class SECT163R2(EllipticCurve): + name = "sect163r2" + key_size = 163 + + +class SECT571K1(EllipticCurve): + name = "sect571k1" + key_size = 571 + + +class SECT409K1(EllipticCurve): + name = "sect409k1" + key_size = 409 + + +class SECT283K1(EllipticCurve): + name = "sect283k1" + key_size = 283 + + +class SECT233K1(EllipticCurve): + name = "sect233k1" + key_size = 233 + + +class SECT163K1(EllipticCurve): + name = "sect163k1" + key_size = 163 + + +class SECP521R1(EllipticCurve): + name = "secp521r1" + key_size = 521 + + +class SECP384R1(EllipticCurve): + name = "secp384r1" + key_size = 384 + + +class SECP256R1(EllipticCurve): + name = "secp256r1" + key_size = 256 + + +class SECP256K1(EllipticCurve): + name = "secp256k1" + key_size = 256 + + +class SECP224R1(EllipticCurve): + name = "secp224r1" + key_size = 224 + + +class SECP192R1(EllipticCurve): + name = "secp192r1" + key_size = 192 + + +class BrainpoolP256R1(EllipticCurve): + name = "brainpoolP256r1" + key_size = 256 + + +class BrainpoolP384R1(EllipticCurve): + name = "brainpoolP384r1" + key_size = 384 + + +class BrainpoolP512R1(EllipticCurve): + name = "brainpoolP512r1" + key_size = 512 + + +_CURVE_TYPES: dict[str, EllipticCurve] = { + "prime192v1": SECP192R1(), + "prime256v1": SECP256R1(), + "secp192r1": SECP192R1(), + "secp224r1": SECP224R1(), + "secp256r1": SECP256R1(), + "secp384r1": SECP384R1(), + "secp521r1": SECP521R1(), + "secp256k1": SECP256K1(), + "sect163k1": SECT163K1(), + "sect233k1": SECT233K1(), + "sect283k1": SECT283K1(), + "sect409k1": SECT409K1(), + "sect571k1": SECT571K1(), + "sect163r2": SECT163R2(), + "sect233r1": SECT233R1(), + "sect283r1": SECT283R1(), + "sect409r1": SECT409R1(), + "sect571r1": SECT571R1(), + "brainpoolP256r1": BrainpoolP256R1(), + "brainpoolP384r1": BrainpoolP384R1(), + "brainpoolP512r1": 
BrainpoolP512R1(), +} + + +class ECDSA(EllipticCurveSignatureAlgorithm): + def __init__( + self, + algorithm: asym_utils.Prehashed | hashes.HashAlgorithm, + deterministic_signing: bool = False, + ): + from cryptography.hazmat.backends.openssl.backend import backend + + if ( + deterministic_signing + and not backend.ecdsa_deterministic_supported() + ): + raise UnsupportedAlgorithm( + "ECDSA with deterministic signature (RFC 6979) is not " + "supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + self._algorithm = algorithm + self._deterministic_signing = deterministic_signing + + @property + def algorithm( + self, + ) -> asym_utils.Prehashed | hashes.HashAlgorithm: + return self._algorithm + + @property + def deterministic_signing( + self, + ) -> bool: + return self._deterministic_signing + + +generate_private_key = rust_openssl.ec.generate_private_key + + +def derive_private_key( + private_value: int, + curve: EllipticCurve, + backend: typing.Any = None, +) -> EllipticCurvePrivateKey: + if not isinstance(private_value, int): + raise TypeError("private_value must be an integer type.") + + if private_value <= 0: + raise ValueError("private_value must be a positive integer.") + + return rust_openssl.ec.derive_private_key(private_value, curve) + + +class ECDH: + pass + + +_OID_TO_CURVE = { + EllipticCurveOID.SECP192R1: SECP192R1, + EllipticCurveOID.SECP224R1: SECP224R1, + EllipticCurveOID.SECP256K1: SECP256K1, + EllipticCurveOID.SECP256R1: SECP256R1, + EllipticCurveOID.SECP384R1: SECP384R1, + EllipticCurveOID.SECP521R1: SECP521R1, + EllipticCurveOID.BRAINPOOLP256R1: BrainpoolP256R1, + EllipticCurveOID.BRAINPOOLP384R1: BrainpoolP384R1, + EllipticCurveOID.BRAINPOOLP512R1: BrainpoolP512R1, + EllipticCurveOID.SECT163K1: SECT163K1, + EllipticCurveOID.SECT163R2: SECT163R2, + EllipticCurveOID.SECT233K1: SECT233K1, + EllipticCurveOID.SECT233R1: SECT233R1, + EllipticCurveOID.SECT283K1: SECT283K1, + EllipticCurveOID.SECT283R1: SECT283R1, + EllipticCurveOID.SECT409K1: SECT409K1, + EllipticCurveOID.SECT409R1: SECT409R1, + EllipticCurveOID.SECT571K1: SECT571K1, + EllipticCurveOID.SECT571R1: SECT571R1, +} + + +def get_curve_for_oid(oid: ObjectIdentifier) -> type[EllipticCurve]: + try: + return _OID_TO_CURVE[oid] + except KeyError: + raise LookupError( + "The provided object identifier has no matching elliptic " + "curve class" + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed25519.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed25519.py new file mode 100644 index 00000000..3a26185d --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed25519.py @@ -0,0 +1,116 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
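+# A minimal usage sketch, assuming the standard Ed25519 API defined below:
+#
+#     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
+#         Ed25519PrivateKey,
+#     )
+#
+#     key = Ed25519PrivateKey.generate()
+#     sig = key.sign(b"payload")                # 64-byte signature
+#     key.public_key().verify(sig, b"payload")  # raises InvalidSignature on failure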
+ +from __future__ import annotations + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization + + +class Ed25519PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> Ed25519PublicKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed25519.from_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def public_bytes_raw(self) -> bytes: + """ + The raw bytes of the public key. + Equivalent to public_bytes(Raw, Raw). + """ + + @abc.abstractmethod + def verify(self, signature: bytes, data: bytes) -> None: + """ + Verify the signature. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +Ed25519PublicKey.register(rust_openssl.ed25519.Ed25519PublicKey) + + +class Ed25519PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> Ed25519PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed25519.generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> Ed25519PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed25519_supported(): + raise UnsupportedAlgorithm( + "ed25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed25519.from_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> Ed25519PublicKey: + """ + The Ed25519PublicKey derived from the private key. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def private_bytes_raw(self) -> bytes: + """ + The raw bytes of the private key. + Equivalent to private_bytes(Raw, Raw, NoEncryption()). + """ + + @abc.abstractmethod + def sign(self, data: bytes) -> bytes: + """ + Signs the data. + """ + + +Ed25519PrivateKey.register(rust_openssl.ed25519.Ed25519PrivateKey) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed448.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed448.py new file mode 100644 index 00000000..78c82c4a --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/ed448.py @@ -0,0 +1,118 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
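+# Usage mirrors ed25519.py above, with larger keys and 114-byte signatures;
+# a minimal sketch assuming the API defined below:
+#
+#     from cryptography.hazmat.primitives.asymmetric.ed448 import Ed448PrivateKey
+#
+#     key = Ed448PrivateKey.generate()
+#     key.public_key().verify(key.sign(b"payload"), b"payload")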
+ +from __future__ import annotations + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization + + +class Ed448PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> Ed448PublicKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed448.from_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def public_bytes_raw(self) -> bytes: + """ + The raw bytes of the public key. + Equivalent to public_bytes(Raw, Raw). + """ + + @abc.abstractmethod + def verify(self, signature: bytes, data: bytes) -> None: + """ + Verify the signature. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +if hasattr(rust_openssl, "ed448"): + Ed448PublicKey.register(rust_openssl.ed448.Ed448PublicKey) + + +class Ed448PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> Ed448PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed448.generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> Ed448PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.ed448_supported(): + raise UnsupportedAlgorithm( + "ed448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM, + ) + + return rust_openssl.ed448.from_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> Ed448PublicKey: + """ + The Ed448PublicKey derived from the private key. + """ + + @abc.abstractmethod + def sign(self, data: bytes) -> bytes: + """ + Signs the data. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def private_bytes_raw(self) -> bytes: + """ + The raw bytes of the private key. + Equivalent to private_bytes(Raw, Raw, NoEncryption()). + """ + + +if hasattr(rust_openssl, "x448"): + Ed448PrivateKey.register(rust_openssl.ed448.Ed448PrivateKey) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/padding.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/padding.py new file mode 100644 index 00000000..b4babf44 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/padding.py @@ -0,0 +1,113 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
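+# A minimal usage sketch, assuming an RSA key pair obtained from the rsa
+# module in this package; OAEP with MGF1 over SHA-256 is the commonly
+# recommended choice for encryption padding:
+#
+#     from cryptography.hazmat.primitives import hashes
+#     from cryptography.hazmat.primitives.asymmetric import padding
+#
+#     oaep = padding.OAEP(
+#         mgf=padding.MGF1(algorithm=hashes.SHA256()),
+#         algorithm=hashes.SHA256(),
+#         label=None,
+#     )
+#     ciphertext = public_key.encrypt(b"secret", oaep)
+#     plaintext = private_key.decrypt(ciphertext, oaep)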
+ +from __future__ import annotations + +import abc + +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives._asymmetric import ( + AsymmetricPadding as AsymmetricPadding, +) +from cryptography.hazmat.primitives.asymmetric import rsa + + +class PKCS1v15(AsymmetricPadding): + name = "EMSA-PKCS1-v1_5" + + +class _MaxLength: + "Sentinel value for `MAX_LENGTH`." + + +class _Auto: + "Sentinel value for `AUTO`." + + +class _DigestLength: + "Sentinel value for `DIGEST_LENGTH`." + + +class PSS(AsymmetricPadding): + MAX_LENGTH = _MaxLength() + AUTO = _Auto() + DIGEST_LENGTH = _DigestLength() + name = "EMSA-PSS" + _salt_length: int | _MaxLength | _Auto | _DigestLength + + def __init__( + self, + mgf: MGF, + salt_length: int | _MaxLength | _Auto | _DigestLength, + ) -> None: + self._mgf = mgf + + if not isinstance( + salt_length, (int, _MaxLength, _Auto, _DigestLength) + ): + raise TypeError( + "salt_length must be an integer, MAX_LENGTH, " + "DIGEST_LENGTH, or AUTO" + ) + + if isinstance(salt_length, int) and salt_length < 0: + raise ValueError("salt_length must be zero or greater.") + + self._salt_length = salt_length + + @property + def mgf(self) -> MGF: + return self._mgf + + +class OAEP(AsymmetricPadding): + name = "EME-OAEP" + + def __init__( + self, + mgf: MGF, + algorithm: hashes.HashAlgorithm, + label: bytes | None, + ): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + + self._mgf = mgf + self._algorithm = algorithm + self._label = label + + @property + def algorithm(self) -> hashes.HashAlgorithm: + return self._algorithm + + @property + def mgf(self) -> MGF: + return self._mgf + + +class MGF(metaclass=abc.ABCMeta): + _algorithm: hashes.HashAlgorithm + + +class MGF1(MGF): + MAX_LENGTH = _MaxLength() + + def __init__(self, algorithm: hashes.HashAlgorithm): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of hashes.HashAlgorithm.") + + self._algorithm = algorithm + + +def calculate_max_pss_salt_length( + key: rsa.RSAPrivateKey | rsa.RSAPublicKey, + hash_algorithm: hashes.HashAlgorithm, +) -> int: + if not isinstance(key, (rsa.RSAPrivateKey, rsa.RSAPublicKey)): + raise TypeError("key must be an RSA public or private key") + # bit length - 1 per RFC 3447 + emlen = (key.key_size + 6) // 8 + salt_length = emlen - hash_algorithm.digest_size - 2 + assert salt_length >= 0 + return salt_length diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/rsa.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/rsa.py new file mode 100644 index 00000000..7a387b5e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/rsa.py @@ -0,0 +1,260 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
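+# A minimal usage sketch, assuming the public API defined below
+# (generate_private_key rejects public exponents other than 3 and 65537
+# and key sizes below 1024 bits):
+#
+#     from cryptography.hazmat.primitives import hashes
+#     from cryptography.hazmat.primitives.asymmetric import padding, rsa
+#
+#     private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
+#     pss = padding.PSS(
+#         mgf=padding.MGF1(hashes.SHA256()),
+#         salt_length=padding.PSS.MAX_LENGTH,
+#     )
+#     sig = private_key.sign(b"data", pss, hashes.SHA256())
+#     private_key.public_key().verify(sig, b"data", pss, hashes.SHA256())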
+ +from __future__ import annotations + +import abc +import typing +from math import gcd + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization, hashes +from cryptography.hazmat.primitives._asymmetric import AsymmetricPadding +from cryptography.hazmat.primitives.asymmetric import utils as asym_utils + + +class RSAPrivateKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def decrypt(self, ciphertext: bytes, padding: AsymmetricPadding) -> bytes: + """ + Decrypts the provided ciphertext. + """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the public modulus. + """ + + @abc.abstractmethod + def public_key(self) -> RSAPublicKey: + """ + The RSAPublicKey associated with this private key. + """ + + @abc.abstractmethod + def sign( + self, + data: bytes, + padding: AsymmetricPadding, + algorithm: asym_utils.Prehashed | hashes.HashAlgorithm, + ) -> bytes: + """ + Signs the data. + """ + + @abc.abstractmethod + def private_numbers(self) -> RSAPrivateNumbers: + """ + Returns an RSAPrivateNumbers. + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + +RSAPrivateKeyWithSerialization = RSAPrivateKey +RSAPrivateKey.register(rust_openssl.rsa.RSAPrivateKey) + + +class RSAPublicKey(metaclass=abc.ABCMeta): + @abc.abstractmethod + def encrypt(self, plaintext: bytes, padding: AsymmetricPadding) -> bytes: + """ + Encrypts the given plaintext. + """ + + @property + @abc.abstractmethod + def key_size(self) -> int: + """ + The bit length of the public modulus. + """ + + @abc.abstractmethod + def public_numbers(self) -> RSAPublicNumbers: + """ + Returns an RSAPublicNumbers + """ + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + Returns the key serialized as bytes. + """ + + @abc.abstractmethod + def verify( + self, + signature: bytes, + data: bytes, + padding: AsymmetricPadding, + algorithm: asym_utils.Prehashed | hashes.HashAlgorithm, + ) -> None: + """ + Verifies the signature of the data. + """ + + @abc.abstractmethod + def recover_data_from_signature( + self, + signature: bytes, + padding: AsymmetricPadding, + algorithm: hashes.HashAlgorithm | None, + ) -> bytes: + """ + Recovers the original data from the signature. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + +RSAPublicKeyWithSerialization = RSAPublicKey +RSAPublicKey.register(rust_openssl.rsa.RSAPublicKey) + +RSAPrivateNumbers = rust_openssl.rsa.RSAPrivateNumbers +RSAPublicNumbers = rust_openssl.rsa.RSAPublicNumbers + + +def generate_private_key( + public_exponent: int, + key_size: int, + backend: typing.Any = None, +) -> RSAPrivateKey: + _verify_rsa_parameters(public_exponent, key_size) + return rust_openssl.rsa.generate_private_key(public_exponent, key_size) + + +def _verify_rsa_parameters(public_exponent: int, key_size: int) -> None: + if public_exponent not in (3, 65537): + raise ValueError( + "public_exponent must be either 3 (for legacy compatibility) or " + "65537. Almost everyone should choose 65537 here!" 
+ ) + + if key_size < 1024: + raise ValueError("key_size must be at least 1024-bits.") + + +def _modinv(e: int, m: int) -> int: + """ + Modular Multiplicative Inverse. Returns x such that: (x*e) mod m == 1 + """ + x1, x2 = 1, 0 + a, b = e, m + while b > 0: + q, r = divmod(a, b) + xn = x1 - q * x2 + a, b, x1, x2 = b, r, x2, xn + return x1 % m + + +def rsa_crt_iqmp(p: int, q: int) -> int: + """ + Compute the CRT (q ** -1) % p value from RSA primes p and q. + """ + return _modinv(q, p) + + +def rsa_crt_dmp1(private_exponent: int, p: int) -> int: + """ + Compute the CRT private_exponent % (p - 1) value from the RSA + private_exponent (d) and p. + """ + return private_exponent % (p - 1) + + +def rsa_crt_dmq1(private_exponent: int, q: int) -> int: + """ + Compute the CRT private_exponent % (q - 1) value from the RSA + private_exponent (d) and q. + """ + return private_exponent % (q - 1) + + +def rsa_recover_private_exponent(e: int, p: int, q: int) -> int: + """ + Compute the RSA private_exponent (d) given the public exponent (e) + and the RSA primes p and q. + + This uses the Carmichael totient function to generate the + smallest possible working value of the private exponent. + """ + # This lambda_n is the Carmichael totient function. + # The original RSA paper uses the Euler totient function + # here: phi_n = (p - 1) * (q - 1) + # Either version of the private exponent will work, but the + # one generated by the older formulation may be larger + # than necessary. (lambda_n always divides phi_n) + # + # TODO: Replace with lcm(p - 1, q - 1) once the minimum + # supported Python version is >= 3.9. + lambda_n = (p - 1) * (q - 1) // gcd(p - 1, q - 1) + return _modinv(e, lambda_n) + + +# Controls the number of iterations rsa_recover_prime_factors will perform +# to obtain the prime factors. Each iteration increments by 2 so the actual +# maximum attempts is half this number. +_MAX_RECOVERY_ATTEMPTS = 1000 + + +def rsa_recover_prime_factors(n: int, e: int, d: int) -> tuple[int, int]: + """ + Compute factors p and q from the private exponent d. We assume that n has + no more than two factors. This function is adapted from code in PyCrypto. + """ + # See 8.2.2(i) in Handbook of Applied Cryptography. + ktot = d * e - 1 + # The quantity d*e-1 is a multiple of phi(n), even, + # and can be represented as t*2^s. + t = ktot + while t % 2 == 0: + t = t // 2 + # Cycle through all multiplicative inverses in Zn. + # The algorithm is non-deterministic, but there is a 50% chance + # any candidate a leads to successful factoring. + # See "Digitalized Signatures and Public Key Functions as Intractable + # as Factorization", M. Rabin, 1979 + spotted = False + a = 2 + while not spotted and a < _MAX_RECOVERY_ATTEMPTS: + k = t + # Cycle through all values a^{t*2^i}=a^k + while k < ktot: + cand = pow(a, k, n) + # Check if a^k is a non-trivial root of unity (mod n) + if cand != 1 and cand != (n - 1) and pow(cand, 2, n) == 1: + # We have found a number such that (cand-1)(cand+1)=0 (mod n). + # Either of the terms divides n. + p = gcd(cand + 1, n) + spotted = True + break + k *= 2 + # This value was not any good... let's try another! + a += 2 + if not spotted: + raise ValueError("Unable to compute factors p and q from exponent d.") + # Found ! 
+ q, r = divmod(n, p) + assert r == 0 + p, q = sorted((p, q), reverse=True) + return (p, q) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/types.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/types.py new file mode 100644 index 00000000..1fe4eaf5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/types.py @@ -0,0 +1,111 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import typing + +from cryptography import utils +from cryptography.hazmat.primitives.asymmetric import ( + dh, + dsa, + ec, + ed448, + ed25519, + rsa, + x448, + x25519, +) + +# Every asymmetric key type +PublicKeyTypes = typing.Union[ + dh.DHPublicKey, + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, + x25519.X25519PublicKey, + x448.X448PublicKey, +] +PUBLIC_KEY_TYPES = PublicKeyTypes +utils.deprecated( + PUBLIC_KEY_TYPES, + __name__, + "Use PublicKeyTypes instead", + utils.DeprecatedIn40, + name="PUBLIC_KEY_TYPES", +) +# Every asymmetric key type +PrivateKeyTypes = typing.Union[ + dh.DHPrivateKey, + ed25519.Ed25519PrivateKey, + ed448.Ed448PrivateKey, + rsa.RSAPrivateKey, + dsa.DSAPrivateKey, + ec.EllipticCurvePrivateKey, + x25519.X25519PrivateKey, + x448.X448PrivateKey, +] +PRIVATE_KEY_TYPES = PrivateKeyTypes +utils.deprecated( + PRIVATE_KEY_TYPES, + __name__, + "Use PrivateKeyTypes instead", + utils.DeprecatedIn40, + name="PRIVATE_KEY_TYPES", +) +# Just the key types we allow to be used for x509 signing. This mirrors +# the certificate public key types +CertificateIssuerPrivateKeyTypes = typing.Union[ + ed25519.Ed25519PrivateKey, + ed448.Ed448PrivateKey, + rsa.RSAPrivateKey, + dsa.DSAPrivateKey, + ec.EllipticCurvePrivateKey, +] +CERTIFICATE_PRIVATE_KEY_TYPES = CertificateIssuerPrivateKeyTypes +utils.deprecated( + CERTIFICATE_PRIVATE_KEY_TYPES, + __name__, + "Use CertificateIssuerPrivateKeyTypes instead", + utils.DeprecatedIn40, + name="CERTIFICATE_PRIVATE_KEY_TYPES", +) +# Just the key types we allow to be used for x509 signing. This mirrors +# the certificate private key types +CertificateIssuerPublicKeyTypes = typing.Union[ + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, +] +CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES = CertificateIssuerPublicKeyTypes +utils.deprecated( + CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES, + __name__, + "Use CertificateIssuerPublicKeyTypes instead", + utils.DeprecatedIn40, + name="CERTIFICATE_ISSUER_PUBLIC_KEY_TYPES", +) +# This type removes DHPublicKey. x448/x25519 can be a public key +# but cannot be used in signing so they are allowed here. 
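+# These aliases are intended for type annotations combined with isinstance()
+# narrowing; an illustrative (hypothetical) helper:
+#
+#     def is_signing_capable(key: CertificatePublicKeyTypes) -> bool:
+#         return not isinstance(
+#             key, (x25519.X25519PublicKey, x448.X448PublicKey)
+#         )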
+CertificatePublicKeyTypes = typing.Union[ + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, + x25519.X25519PublicKey, + x448.X448PublicKey, +] +CERTIFICATE_PUBLIC_KEY_TYPES = CertificatePublicKeyTypes +utils.deprecated( + CERTIFICATE_PUBLIC_KEY_TYPES, + __name__, + "Use CertificatePublicKeyTypes instead", + utils.DeprecatedIn40, + name="CERTIFICATE_PUBLIC_KEY_TYPES", +) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/utils.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/utils.py new file mode 100644 index 00000000..826b9567 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/utils.py @@ -0,0 +1,24 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import asn1 +from cryptography.hazmat.primitives import hashes + +decode_dss_signature = asn1.decode_dss_signature +encode_dss_signature = asn1.encode_dss_signature + + +class Prehashed: + def __init__(self, algorithm: hashes.HashAlgorithm): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise TypeError("Expected instance of HashAlgorithm.") + + self._algorithm = algorithm + self._digest_size = algorithm.digest_size + + @property + def digest_size(self) -> int: + return self._digest_size diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x25519.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x25519.py new file mode 100644 index 00000000..0cfa36e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x25519.py @@ -0,0 +1,109 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization + + +class X25519PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> X25519PublicKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return rust_openssl.x25519.from_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def public_bytes_raw(self) -> bytes: + """ + The raw bytes of the public key. + Equivalent to public_bytes(Raw, Raw). + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. 
+ """ + + +X25519PublicKey.register(rust_openssl.x25519.X25519PublicKey) + + +class X25519PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> X25519PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + return rust_openssl.x25519.generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> X25519PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x25519_supported(): + raise UnsupportedAlgorithm( + "X25519 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return rust_openssl.x25519.from_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> X25519PublicKey: + """ + Returns the public key associated with this private key + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def private_bytes_raw(self) -> bytes: + """ + The raw bytes of the private key. + Equivalent to private_bytes(Raw, Raw, NoEncryption()). + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: X25519PublicKey) -> bytes: + """ + Performs a key exchange operation using the provided peer's public key. + """ + + +X25519PrivateKey.register(rust_openssl.x25519.X25519PrivateKey) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x448.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x448.py new file mode 100644 index 00000000..86086ab4 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/asymmetric/x448.py @@ -0,0 +1,112 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import _serialization + + +class X448PublicKey(metaclass=abc.ABCMeta): + @classmethod + def from_public_bytes(cls, data: bytes) -> X448PublicKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return rust_openssl.x448.from_public_bytes(data) + + @abc.abstractmethod + def public_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PublicFormat, + ) -> bytes: + """ + The serialized bytes of the public key. + """ + + @abc.abstractmethod + def public_bytes_raw(self) -> bytes: + """ + The raw bytes of the public key. + Equivalent to public_bytes(Raw, Raw). + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. 
+ """ + + +if hasattr(rust_openssl, "x448"): + X448PublicKey.register(rust_openssl.x448.X448PublicKey) + + +class X448PrivateKey(metaclass=abc.ABCMeta): + @classmethod + def generate(cls) -> X448PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return rust_openssl.x448.generate_key() + + @classmethod + def from_private_bytes(cls, data: bytes) -> X448PrivateKey: + from cryptography.hazmat.backends.openssl.backend import backend + + if not backend.x448_supported(): + raise UnsupportedAlgorithm( + "X448 is not supported by this version of OpenSSL.", + _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM, + ) + + return rust_openssl.x448.from_private_bytes(data) + + @abc.abstractmethod + def public_key(self) -> X448PublicKey: + """ + Returns the public key associated with this private key + """ + + @abc.abstractmethod + def private_bytes( + self, + encoding: _serialization.Encoding, + format: _serialization.PrivateFormat, + encryption_algorithm: _serialization.KeySerializationEncryption, + ) -> bytes: + """ + The serialized bytes of the private key. + """ + + @abc.abstractmethod + def private_bytes_raw(self) -> bytes: + """ + The raw bytes of the private key. + Equivalent to private_bytes(Raw, Raw, NoEncryption()). + """ + + @abc.abstractmethod + def exchange(self, peer_public_key: X448PublicKey) -> bytes: + """ + Performs a key exchange operation using the provided peer's public key. + """ + + +if hasattr(rust_openssl, "x448"): + X448PrivateKey.register(rust_openssl.x448.X448PrivateKey) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/__init__.py new file mode 100644 index 00000000..10c15d0f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/__init__.py @@ -0,0 +1,27 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.primitives._cipheralgorithm import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers.base import ( + AEADCipherContext, + AEADDecryptionContext, + AEADEncryptionContext, + Cipher, + CipherContext, +) + +__all__ = [ + "AEADCipherContext", + "AEADDecryptionContext", + "AEADEncryptionContext", + "BlockCipherAlgorithm", + "Cipher", + "CipherAlgorithm", + "CipherContext", +] diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/aead.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/aead.py new file mode 100644 index 00000000..c8a582d7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/aead.py @@ -0,0 +1,23 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +__all__ = [ + "AESCCM", + "AESGCM", + "AESGCMSIV", + "AESOCB3", + "AESSIV", + "ChaCha20Poly1305", +] + +AESGCM = rust_openssl.aead.AESGCM +ChaCha20Poly1305 = rust_openssl.aead.ChaCha20Poly1305 +AESCCM = rust_openssl.aead.AESCCM +AESSIV = rust_openssl.aead.AESSIV +AESOCB3 = rust_openssl.aead.AESOCB3 +AESGCMSIV = rust_openssl.aead.AESGCMSIV diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/algorithms.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/algorithms.py new file mode 100644 index 00000000..1051ba32 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/algorithms.py @@ -0,0 +1,177 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography import utils +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + ARC4 as ARC4, +) +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + CAST5 as CAST5, +) +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + IDEA as IDEA, +) +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + SEED as SEED, +) +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + Blowfish as Blowfish, +) +from cryptography.hazmat.decrepit.ciphers.algorithms import ( + TripleDES as TripleDES, +) +from cryptography.hazmat.primitives._cipheralgorithm import _verify_key_size +from cryptography.hazmat.primitives.ciphers import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) + + +class AES(BlockCipherAlgorithm): + name = "AES" + block_size = 128 + # 512 added to support AES-256-XTS, which uses 512-bit keys + key_sizes = frozenset([128, 192, 256, 512]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class AES128(BlockCipherAlgorithm): + name = "AES" + block_size = 128 + key_sizes = frozenset([128]) + key_size = 128 + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + +class AES256(BlockCipherAlgorithm): + name = "AES" + block_size = 128 + key_sizes = frozenset([256]) + key_size = 256 + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + +class Camellia(BlockCipherAlgorithm): + name = "camellia" + block_size = 128 + key_sizes = frozenset([128, 192, 256]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +utils.deprecated( + ARC4, + __name__, + "ARC4 has been moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.ARC4 and " + "will be removed from this module in 48.0.0.", + utils.DeprecatedIn43, + name="ARC4", +) + + +utils.deprecated( + TripleDES, + __name__, + "TripleDES has been moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.TripleDES and " + "will be removed from this module in 48.0.0.", + utils.DeprecatedIn43, + name="TripleDES", +) + +utils.deprecated( + Blowfish, + __name__, + "Blowfish has been moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.Blowfish and " + "will be removed from this module in 45.0.0.", + utils.DeprecatedIn37, + name="Blowfish", +) + + +utils.deprecated( + CAST5, + __name__, + "CAST5 has been 
moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.CAST5 and " + "will be removed from this module in 45.0.0.", + utils.DeprecatedIn37, + name="CAST5", +) + + +utils.deprecated( + IDEA, + __name__, + "IDEA has been moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.IDEA and " + "will be removed from this module in 45.0.0.", + utils.DeprecatedIn37, + name="IDEA", +) + + +utils.deprecated( + SEED, + __name__, + "SEED has been moved to " + "cryptography.hazmat.decrepit.ciphers.algorithms.SEED and " + "will be removed from this module in 45.0.0.", + utils.DeprecatedIn37, + name="SEED", +) + + +class ChaCha20(CipherAlgorithm): + name = "ChaCha20" + key_sizes = frozenset([256]) + + def __init__(self, key: bytes, nonce: bytes): + self.key = _verify_key_size(self, key) + utils._check_byteslike("nonce", nonce) + + if len(nonce) != 16: + raise ValueError("nonce must be 128-bits (16 bytes)") + + self._nonce = nonce + + @property + def nonce(self) -> bytes: + return self._nonce + + @property + def key_size(self) -> int: + return len(self.key) * 8 + + +class SM4(BlockCipherAlgorithm): + name = "SM4" + block_size = 128 + key_sizes = frozenset([128]) + + def __init__(self, key: bytes): + self.key = _verify_key_size(self, key) + + @property + def key_size(self) -> int: + return len(self.key) * 8 diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/base.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/base.py new file mode 100644 index 00000000..ebfa8052 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/base.py @@ -0,0 +1,145 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import typing + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives._cipheralgorithm import CipherAlgorithm +from cryptography.hazmat.primitives.ciphers import modes + + +class CipherContext(metaclass=abc.ABCMeta): + @abc.abstractmethod + def update(self, data: bytes) -> bytes: + """ + Processes the provided bytes through the cipher and returns the results + as bytes. + """ + + @abc.abstractmethod + def update_into(self, data: bytes, buf: bytes) -> int: + """ + Processes the provided bytes and writes the resulting data into the + provided buffer. Returns the number of bytes written. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Returns the results of processing the final block as bytes. + """ + + @abc.abstractmethod + def reset_nonce(self, nonce: bytes) -> None: + """ + Resets the nonce for the cipher context to the provided value. + Raises an exception if it does not support reset or if the + provided nonce does not have a valid length. + """ + + +class AEADCipherContext(CipherContext, metaclass=abc.ABCMeta): + @abc.abstractmethod + def authenticate_additional_data(self, data: bytes) -> None: + """ + Authenticates the provided bytes. + """ + + +class AEADDecryptionContext(AEADCipherContext, metaclass=abc.ABCMeta): + @abc.abstractmethod + def finalize_with_tag(self, tag: bytes) -> bytes: + """ + Returns the results of processing the final block as bytes and allows + delayed passing of the authentication tag. 
+ """ + + +class AEADEncryptionContext(AEADCipherContext, metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def tag(self) -> bytes: + """ + Returns tag bytes. This is only available after encryption is + finalized. + """ + + +Mode = typing.TypeVar( + "Mode", bound=typing.Optional[modes.Mode], covariant=True +) + + +class Cipher(typing.Generic[Mode]): + def __init__( + self, + algorithm: CipherAlgorithm, + mode: Mode, + backend: typing.Any = None, + ) -> None: + if not isinstance(algorithm, CipherAlgorithm): + raise TypeError("Expected interface of CipherAlgorithm.") + + if mode is not None: + # mypy needs this assert to narrow the type from our generic + # type. Maybe it won't some time in the future. + assert isinstance(mode, modes.Mode) + mode.validate_for_algorithm(algorithm) + + self.algorithm = algorithm + self.mode = mode + + @typing.overload + def encryptor( + self: Cipher[modes.ModeWithAuthenticationTag], + ) -> AEADEncryptionContext: ... + + @typing.overload + def encryptor( + self: _CIPHER_TYPE, + ) -> CipherContext: ... + + def encryptor(self): + if isinstance(self.mode, modes.ModeWithAuthenticationTag): + if self.mode.tag is not None: + raise ValueError( + "Authentication tag must be None when encrypting." + ) + + return rust_openssl.ciphers.create_encryption_ctx( + self.algorithm, self.mode + ) + + @typing.overload + def decryptor( + self: Cipher[modes.ModeWithAuthenticationTag], + ) -> AEADDecryptionContext: ... + + @typing.overload + def decryptor( + self: _CIPHER_TYPE, + ) -> CipherContext: ... + + def decryptor(self): + return rust_openssl.ciphers.create_decryption_ctx( + self.algorithm, self.mode + ) + + +_CIPHER_TYPE = Cipher[ + typing.Union[ + modes.ModeWithNonce, + modes.ModeWithTweak, + None, + modes.ECB, + modes.ModeWithInitializationVector, + ] +] + +CipherContext.register(rust_openssl.ciphers.CipherContext) +AEADEncryptionContext.register(rust_openssl.ciphers.AEADEncryptionContext) +AEADDecryptionContext.register(rust_openssl.ciphers.AEADDecryptionContext) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/modes.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/modes.py new file mode 100644 index 00000000..1dd2cc1e --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/ciphers/modes.py @@ -0,0 +1,268 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography import utils +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.primitives._cipheralgorithm import ( + BlockCipherAlgorithm, + CipherAlgorithm, +) +from cryptography.hazmat.primitives.ciphers import algorithms + + +class Mode(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def name(self) -> str: + """ + A string naming this mode (e.g. "ECB", "CBC"). + """ + + @abc.abstractmethod + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + """ + Checks that all the necessary invariants of this (mode, algorithm) + combination are met. + """ + + +class ModeWithInitializationVector(Mode, metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def initialization_vector(self) -> bytes: + """ + The value of the initialization vector for this mode as bytes. 
+ """ + + +class ModeWithTweak(Mode, metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def tweak(self) -> bytes: + """ + The value of the tweak for this mode as bytes. + """ + + +class ModeWithNonce(Mode, metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def nonce(self) -> bytes: + """ + The value of the nonce for this mode as bytes. + """ + + +class ModeWithAuthenticationTag(Mode, metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def tag(self) -> bytes | None: + """ + The value of the tag supplied to the constructor of this mode. + """ + + +def _check_aes_key_length(self: Mode, algorithm: CipherAlgorithm) -> None: + if algorithm.key_size > 256 and algorithm.name == "AES": + raise ValueError( + "Only 128, 192, and 256 bit keys are allowed for this AES mode" + ) + + +def _check_iv_length( + self: ModeWithInitializationVector, algorithm: BlockCipherAlgorithm +) -> None: + iv_len = len(self.initialization_vector) + if iv_len * 8 != algorithm.block_size: + raise ValueError(f"Invalid IV size ({iv_len}) for {self.name}.") + + +def _check_nonce_length( + nonce: bytes, name: str, algorithm: CipherAlgorithm +) -> None: + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + f"{name} requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + if len(nonce) * 8 != algorithm.block_size: + raise ValueError(f"Invalid nonce size ({len(nonce)}) for {name}.") + + +def _check_iv_and_key_length( + self: ModeWithInitializationVector, algorithm: CipherAlgorithm +) -> None: + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + f"{self} requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + _check_aes_key_length(self, algorithm) + _check_iv_length(self, algorithm) + + +class CBC(ModeWithInitializationVector): + name = "CBC" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class XTS(ModeWithTweak): + name = "XTS" + + def __init__(self, tweak: bytes): + utils._check_byteslike("tweak", tweak) + + if len(tweak) != 16: + raise ValueError("tweak must be 128-bits (16 bytes)") + + self._tweak = tweak + + @property + def tweak(self) -> bytes: + return self._tweak + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + if isinstance(algorithm, (algorithms.AES128, algorithms.AES256)): + raise TypeError( + "The AES128 and AES256 classes do not support XTS, please use " + "the standard AES class instead." 
+ ) + + if algorithm.key_size not in (256, 512): + raise ValueError( + "The XTS specification requires a 256-bit key for AES-128-XTS" + " and 512-bit key for AES-256-XTS" + ) + + +class ECB(Mode): + name = "ECB" + + validate_for_algorithm = _check_aes_key_length + + +class OFB(ModeWithInitializationVector): + name = "OFB" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CFB(ModeWithInitializationVector): + name = "CFB" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CFB8(ModeWithInitializationVector): + name = "CFB8" + + def __init__(self, initialization_vector: bytes): + utils._check_byteslike("initialization_vector", initialization_vector) + self._initialization_vector = initialization_vector + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + validate_for_algorithm = _check_iv_and_key_length + + +class CTR(ModeWithNonce): + name = "CTR" + + def __init__(self, nonce: bytes): + utils._check_byteslike("nonce", nonce) + self._nonce = nonce + + @property + def nonce(self) -> bytes: + return self._nonce + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + _check_aes_key_length(self, algorithm) + _check_nonce_length(self.nonce, self.name, algorithm) + + +class GCM(ModeWithInitializationVector, ModeWithAuthenticationTag): + name = "GCM" + _MAX_ENCRYPTED_BYTES = (2**39 - 256) // 8 + _MAX_AAD_BYTES = (2**64) // 8 + + def __init__( + self, + initialization_vector: bytes, + tag: bytes | None = None, + min_tag_length: int = 16, + ): + # OpenSSL 3.0.0 constrains GCM IVs to [64, 1024] bits inclusive + # This is a sane limit anyway so we'll enforce it here. + utils._check_byteslike("initialization_vector", initialization_vector) + if len(initialization_vector) < 8 or len(initialization_vector) > 128: + raise ValueError( + "initialization_vector must be between 8 and 128 bytes (64 " + "and 1024 bits)." + ) + self._initialization_vector = initialization_vector + if tag is not None: + utils._check_bytes("tag", tag) + if min_tag_length < 4: + raise ValueError("min_tag_length must be >= 4") + if len(tag) < min_tag_length: + raise ValueError( + f"Authentication tag must be {min_tag_length} bytes or " + "longer." + ) + self._tag = tag + self._min_tag_length = min_tag_length + + @property + def tag(self) -> bytes | None: + return self._tag + + @property + def initialization_vector(self) -> bytes: + return self._initialization_vector + + def validate_for_algorithm(self, algorithm: CipherAlgorithm) -> None: + _check_aes_key_length(self, algorithm) + if not isinstance(algorithm, BlockCipherAlgorithm): + raise UnsupportedAlgorithm( + "GCM requires a block cipher algorithm", + _Reasons.UNSUPPORTED_CIPHER, + ) + block_size_bytes = algorithm.block_size // 8 + if self._tag is not None and len(self._tag) > block_size_bytes: + raise ValueError( + f"Authentication tag cannot be more than {block_size_bytes} " + "bytes." 
+ ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/cmac.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/cmac.py new file mode 100644 index 00000000..2c67ce22 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/cmac.py @@ -0,0 +1,10 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +__all__ = ["CMAC"] +CMAC = rust_openssl.cmac.CMAC diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/constant_time.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/constant_time.py new file mode 100644 index 00000000..3975c714 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/constant_time.py @@ -0,0 +1,14 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import hmac + + +def bytes_eq(a: bytes, b: bytes) -> bool: + if not isinstance(a, bytes) or not isinstance(b, bytes): + raise TypeError("a and b must be bytes.") + + return hmac.compare_digest(a, b) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hashes.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hashes.py new file mode 100644 index 00000000..b819e399 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hashes.py @@ -0,0 +1,242 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +__all__ = [ + "MD5", + "SHA1", + "SHA3_224", + "SHA3_256", + "SHA3_384", + "SHA3_512", + "SHA224", + "SHA256", + "SHA384", + "SHA512", + "SHA512_224", + "SHA512_256", + "SHAKE128", + "SHAKE256", + "SM3", + "BLAKE2b", + "BLAKE2s", + "ExtendableOutputFunction", + "Hash", + "HashAlgorithm", + "HashContext", +] + + +class HashAlgorithm(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def name(self) -> str: + """ + A string naming this algorithm (e.g. "sha256", "md5"). + """ + + @property + @abc.abstractmethod + def digest_size(self) -> int: + """ + The size of the resulting digest in bytes. + """ + + @property + @abc.abstractmethod + def block_size(self) -> int | None: + """ + The internal block size of the hash function, or None if the hash + function does not use blocks internally (e.g. SHA3). + """ + + +class HashContext(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def algorithm(self) -> HashAlgorithm: + """ + A HashAlgorithm that will be used by this context. + """ + + @abc.abstractmethod + def update(self, data: bytes) -> None: + """ + Processes the provided bytes through the hash. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Finalizes the hash context and returns the hash digest as bytes. + """ + + @abc.abstractmethod + def copy(self) -> HashContext: + """ + Return a HashContext that is a copy of the current context. 
+ """ + + +Hash = rust_openssl.hashes.Hash +HashContext.register(Hash) + + +class ExtendableOutputFunction(metaclass=abc.ABCMeta): + """ + An interface for extendable output functions. + """ + + +class SHA1(HashAlgorithm): + name = "sha1" + digest_size = 20 + block_size = 64 + + +class SHA512_224(HashAlgorithm): # noqa: N801 + name = "sha512-224" + digest_size = 28 + block_size = 128 + + +class SHA512_256(HashAlgorithm): # noqa: N801 + name = "sha512-256" + digest_size = 32 + block_size = 128 + + +class SHA224(HashAlgorithm): + name = "sha224" + digest_size = 28 + block_size = 64 + + +class SHA256(HashAlgorithm): + name = "sha256" + digest_size = 32 + block_size = 64 + + +class SHA384(HashAlgorithm): + name = "sha384" + digest_size = 48 + block_size = 128 + + +class SHA512(HashAlgorithm): + name = "sha512" + digest_size = 64 + block_size = 128 + + +class SHA3_224(HashAlgorithm): # noqa: N801 + name = "sha3-224" + digest_size = 28 + block_size = None + + +class SHA3_256(HashAlgorithm): # noqa: N801 + name = "sha3-256" + digest_size = 32 + block_size = None + + +class SHA3_384(HashAlgorithm): # noqa: N801 + name = "sha3-384" + digest_size = 48 + block_size = None + + +class SHA3_512(HashAlgorithm): # noqa: N801 + name = "sha3-512" + digest_size = 64 + block_size = None + + +class SHAKE128(HashAlgorithm, ExtendableOutputFunction): + name = "shake128" + block_size = None + + def __init__(self, digest_size: int): + if not isinstance(digest_size, int): + raise TypeError("digest_size must be an integer") + + if digest_size < 1: + raise ValueError("digest_size must be a positive integer") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class SHAKE256(HashAlgorithm, ExtendableOutputFunction): + name = "shake256" + block_size = None + + def __init__(self, digest_size: int): + if not isinstance(digest_size, int): + raise TypeError("digest_size must be an integer") + + if digest_size < 1: + raise ValueError("digest_size must be a positive integer") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class MD5(HashAlgorithm): + name = "md5" + digest_size = 16 + block_size = 64 + + +class BLAKE2b(HashAlgorithm): + name = "blake2b" + _max_digest_size = 64 + _min_digest_size = 1 + block_size = 128 + + def __init__(self, digest_size: int): + if digest_size != 64: + raise ValueError("Digest size must be 64") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class BLAKE2s(HashAlgorithm): + name = "blake2s" + block_size = 64 + _max_digest_size = 32 + _min_digest_size = 1 + + def __init__(self, digest_size: int): + if digest_size != 32: + raise ValueError("Digest size must be 32") + + self._digest_size = digest_size + + @property + def digest_size(self) -> int: + return self._digest_size + + +class SM3(HashAlgorithm): + name = "sm3" + digest_size = 32 + block_size = 64 diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hmac.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hmac.py new file mode 100644 index 00000000..a9442d59 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/hmac.py @@ -0,0 +1,13 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
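The `hmac.py` module being added below is a thin facade: `HMAC` is the Rust-backed OpenSSL implementation, registered as a `HashContext`. A minimal usage sketch (the key and message are illustrative placeholders, not values from this diff):

```python
from cryptography.hazmat.primitives import hashes, hmac

key = b"\x00" * 32  # illustrative key; use a randomly generated secret
h = hmac.HMAC(key, hashes.SHA256())
h.update(b"message to authenticate")
tag = h.finalize()

# Verification recomputes the tag in constant time and raises
# InvalidSignature on mismatch; contexts are single-use.
h2 = hmac.HMAC(key, hashes.SHA256())
h2.update(b"message to authenticate")
h2.verify(tag)
```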
+ +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import hashes + +__all__ = ["HMAC"] + +HMAC = rust_openssl.hmac.HMAC +hashes.HashContext.register(HMAC) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/__init__.py new file mode 100644 index 00000000..79bb459f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/__init__.py @@ -0,0 +1,23 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc + + +class KeyDerivationFunction(metaclass=abc.ABCMeta): + @abc.abstractmethod + def derive(self, key_material: bytes) -> bytes: + """ + Deterministically generates and returns a new key based on the existing + key material. + """ + + @abc.abstractmethod + def verify(self, key_material: bytes, expected_key: bytes) -> None: + """ + Checks whether the key generated by the key material matches the + expected derived key. Raises an exception if they do not match. + """ diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/concatkdf.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/concatkdf.py new file mode 100644 index 00000000..96d9d4c0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/concatkdf.py @@ -0,0 +1,124 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
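The `concatkdf.py` module below implements the Concatenation KDF (NIST SP 800-56A) in hash and HMAC variants, both driven by the shared `_concatkdf_derive` counter loop. A minimal sketch of the hash variant, assuming an illustrative `otherinfo` value:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.concatkdf import ConcatKDFHash

otherinfo = b"concatkdf-example"  # illustrative context data
ckdf = ConcatKDFHash(algorithm=hashes.SHA256(), length=32, otherinfo=otherinfo)
key = ckdf.derive(b"input key material")

# Instances are single-use: a second derive() raises AlreadyFinalized,
# so verification needs a freshly constructed instance.
ckdf = ConcatKDFHash(algorithm=hashes.SHA256(), length=32, otherinfo=otherinfo)
ckdf.verify(b"input key material", key)
```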
+ +from __future__ import annotations + +import typing + +from cryptography import utils +from cryptography.exceptions import AlreadyFinalized, InvalidKey +from cryptography.hazmat.primitives import constant_time, hashes, hmac +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +def _int_to_u32be(n: int) -> bytes: + return n.to_bytes(length=4, byteorder="big") + + +def _common_args_checks( + algorithm: hashes.HashAlgorithm, + length: int, + otherinfo: bytes | None, +) -> None: + max_length = algorithm.digest_size * (2**32 - 1) + if length > max_length: + raise ValueError(f"Cannot derive keys larger than {max_length} bits.") + if otherinfo is not None: + utils._check_bytes("otherinfo", otherinfo) + + +def _concatkdf_derive( + key_material: bytes, + length: int, + auxfn: typing.Callable[[], hashes.HashContext], + otherinfo: bytes, +) -> bytes: + utils._check_byteslike("key_material", key_material) + output = [b""] + outlen = 0 + counter = 1 + + while length > outlen: + h = auxfn() + h.update(_int_to_u32be(counter)) + h.update(key_material) + h.update(otherinfo) + output.append(h.finalize()) + outlen += len(output[-1]) + counter += 1 + + return b"".join(output)[:length] + + +class ConcatKDFHash(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + otherinfo: bytes | None, + backend: typing.Any = None, + ): + _common_args_checks(algorithm, length, otherinfo) + self._algorithm = algorithm + self._length = length + self._otherinfo: bytes = otherinfo if otherinfo is not None else b"" + + self._used = False + + def _hash(self) -> hashes.Hash: + return hashes.Hash(self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized + self._used = True + return _concatkdf_derive( + key_material, self._length, self._hash, self._otherinfo + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class ConcatKDFHMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: bytes | None, + otherinfo: bytes | None, + backend: typing.Any = None, + ): + _common_args_checks(algorithm, length, otherinfo) + self._algorithm = algorithm + self._length = length + self._otherinfo: bytes = otherinfo if otherinfo is not None else b"" + + if algorithm.block_size is None: + raise TypeError(f"{algorithm.name} is unsupported for ConcatKDF") + + if salt is None: + salt = b"\x00" * algorithm.block_size + else: + utils._check_bytes("salt", salt) + + self._salt = salt + + self._used = False + + def _hmac(self) -> hmac.HMAC: + return hmac.HMAC(self._salt, self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized + self._used = True + return _concatkdf_derive( + key_material, self._length, self._hmac, self._otherinfo + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/hkdf.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/hkdf.py new file mode 100644 index 00000000..ee562d2f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/hkdf.py @@ -0,0 +1,101 @@ +# This file is dual licensed under the terms of the Apache License, 
Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import typing + +from cryptography import utils +from cryptography.exceptions import AlreadyFinalized, InvalidKey +from cryptography.hazmat.primitives import constant_time, hashes, hmac +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class HKDF(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: bytes | None, + info: bytes | None, + backend: typing.Any = None, + ): + self._algorithm = algorithm + + if salt is None: + salt = b"\x00" * self._algorithm.digest_size + else: + utils._check_bytes("salt", salt) + + self._salt = salt + + self._hkdf_expand = HKDFExpand(self._algorithm, length, info) + + def _extract(self, key_material: bytes) -> bytes: + h = hmac.HMAC(self._salt, self._algorithm) + h.update(key_material) + return h.finalize() + + def derive(self, key_material: bytes) -> bytes: + utils._check_byteslike("key_material", key_material) + return self._hkdf_expand.derive(self._extract(key_material)) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class HKDFExpand(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + info: bytes | None, + backend: typing.Any = None, + ): + self._algorithm = algorithm + + max_length = 255 * algorithm.digest_size + + if length > max_length: + raise ValueError( + f"Cannot derive keys larger than {max_length} octets." + ) + + self._length = length + + if info is None: + info = b"" + else: + utils._check_bytes("info", info) + + self._info = info + + self._used = False + + def _expand(self, key_material: bytes) -> bytes: + output = [b""] + counter = 1 + + while self._algorithm.digest_size * (len(output) - 1) < self._length: + h = hmac.HMAC(key_material, self._algorithm) + h.update(output[-1]) + h.update(self._info) + h.update(bytes([counter])) + output.append(h.finalize()) + counter += 1 + + return b"".join(output)[: self._length] + + def derive(self, key_material: bytes) -> bytes: + utils._check_byteslike("key_material", key_material) + if self._used: + raise AlreadyFinalized + + self._used = True + return self._expand(key_material) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/kbkdf.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/kbkdf.py new file mode 100644 index 00000000..802b484c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/kbkdf.py @@ -0,0 +1,302 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.primitives import ( + ciphers, + cmac, + constant_time, + hashes, + hmac, +) +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class Mode(utils.Enum): + CounterMode = "ctr" + + +class CounterLocation(utils.Enum): + BeforeFixed = "before_fixed" + AfterFixed = "after_fixed" + MiddleFixed = "middle_fixed" + + +class _KBKDFDeriver: + def __init__( + self, + prf: typing.Callable, + mode: Mode, + length: int, + rlen: int, + llen: int | None, + location: CounterLocation, + break_location: int | None, + label: bytes | None, + context: bytes | None, + fixed: bytes | None, + ): + assert callable(prf) + + if not isinstance(mode, Mode): + raise TypeError("mode must be of type Mode") + + if not isinstance(location, CounterLocation): + raise TypeError("location must be of type CounterLocation") + + if break_location is None and location is CounterLocation.MiddleFixed: + raise ValueError("Please specify a break_location") + + if ( + break_location is not None + and location != CounterLocation.MiddleFixed + ): + raise ValueError( + "break_location is ignored when location is not" + " CounterLocation.MiddleFixed" + ) + + if break_location is not None and not isinstance(break_location, int): + raise TypeError("break_location must be an integer") + + if break_location is not None and break_location < 0: + raise ValueError("break_location must be a positive integer") + + if (label or context) and fixed: + raise ValueError( + "When supplying fixed data, label and context are ignored." + ) + + if rlen is None or not self._valid_byte_length(rlen): + raise ValueError("rlen must be between 1 and 4") + + if llen is None and fixed is None: + raise ValueError("Please specify an llen") + + if llen is not None and not isinstance(llen, int): + raise TypeError("llen must be an integer") + + if llen == 0: + raise ValueError("llen must be non-zero") + + if label is None: + label = b"" + + if context is None: + context = b"" + + utils._check_bytes("label", label) + utils._check_bytes("context", context) + self._prf = prf + self._mode = mode + self._length = length + self._rlen = rlen + self._llen = llen + self._location = location + self._break_location = break_location + self._label = label + self._context = context + self._used = False + self._fixed_data = fixed + + @staticmethod + def _valid_byte_length(value: int) -> bool: + if not isinstance(value, int): + raise TypeError("value must be of type int") + + value_bin = utils.int_to_bytes(1, value) + if not 1 <= len(value_bin) <= 4: + return False + return True + + def derive(self, key_material: bytes, prf_output_size: int) -> bytes: + if self._used: + raise AlreadyFinalized + + utils._check_byteslike("key_material", key_material) + self._used = True + + # inverse floor division (equivalent to ceiling) + rounds = -(-self._length // prf_output_size) + + output = [b""] + + # For counter mode, the number of iterations shall not be + # larger than 2^r-1, where r <= 32 is the binary length of the counter + # This ensures that the counter values used as an input to the + # PRF will not repeat during a particular call to the KDF function. 
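+        # For example, with rlen=4 the counter is 32 bits wide, so at most
+        # 2**32 - 1 PRF invocations are allowed (roughly 137 GB of output
+        # when the PRF is HMAC-SHA256 with a 32-byte digest).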
+ r_bin = utils.int_to_bytes(1, self._rlen) + if rounds > pow(2, len(r_bin) * 8) - 1: + raise ValueError("There are too many iterations.") + + fixed = self._generate_fixed_input() + + if self._location == CounterLocation.BeforeFixed: + data_before_ctr = b"" + data_after_ctr = fixed + elif self._location == CounterLocation.AfterFixed: + data_before_ctr = fixed + data_after_ctr = b"" + else: + if isinstance( + self._break_location, int + ) and self._break_location > len(fixed): + raise ValueError("break_location offset > len(fixed)") + data_before_ctr = fixed[: self._break_location] + data_after_ctr = fixed[self._break_location :] + + for i in range(1, rounds + 1): + h = self._prf(key_material) + + counter = utils.int_to_bytes(i, self._rlen) + input_data = data_before_ctr + counter + data_after_ctr + + h.update(input_data) + + output.append(h.finalize()) + + return b"".join(output)[: self._length] + + def _generate_fixed_input(self) -> bytes: + if self._fixed_data and isinstance(self._fixed_data, bytes): + return self._fixed_data + + l_val = utils.int_to_bytes(self._length * 8, self._llen) + + return b"".join([self._label, b"\x00", self._context, l_val]) + + +class KBKDFHMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + mode: Mode, + length: int, + rlen: int, + llen: int | None, + location: CounterLocation, + label: bytes | None, + context: bytes | None, + fixed: bytes | None, + backend: typing.Any = None, + *, + break_location: int | None = None, + ): + if not isinstance(algorithm, hashes.HashAlgorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported hash algorithm.", + _Reasons.UNSUPPORTED_HASH, + ) + + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.hmac_supported(algorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported hmac algorithm.", + _Reasons.UNSUPPORTED_HASH, + ) + + self._algorithm = algorithm + + self._deriver = _KBKDFDeriver( + self._prf, + mode, + length, + rlen, + llen, + location, + break_location, + label, + context, + fixed, + ) + + def _prf(self, key_material: bytes) -> hmac.HMAC: + return hmac.HMAC(key_material, self._algorithm) + + def derive(self, key_material: bytes) -> bytes: + return self._deriver.derive(key_material, self._algorithm.digest_size) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey + + +class KBKDFCMAC(KeyDerivationFunction): + def __init__( + self, + algorithm, + mode: Mode, + length: int, + rlen: int, + llen: int | None, + location: CounterLocation, + label: bytes | None, + context: bytes | None, + fixed: bytes | None, + backend: typing.Any = None, + *, + break_location: int | None = None, + ): + if not issubclass( + algorithm, ciphers.BlockCipherAlgorithm + ) or not issubclass(algorithm, ciphers.CipherAlgorithm): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported cipher algorithm.", + _Reasons.UNSUPPORTED_CIPHER, + ) + + self._algorithm = algorithm + self._cipher: ciphers.BlockCipherAlgorithm | None = None + + self._deriver = _KBKDFDeriver( + self._prf, + mode, + length, + rlen, + llen, + location, + break_location, + label, + context, + fixed, + ) + + def _prf(self, _: bytes) -> cmac.CMAC: + assert self._cipher is not None + + return cmac.CMAC(self._cipher) + + def derive(self, key_material: bytes) -> bytes: + self._cipher = self._algorithm(key_material) + + assert self._cipher is 
not None + + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.cmac_algorithm_supported(self._cipher): + raise UnsupportedAlgorithm( + "Algorithm supplied is not a supported cipher algorithm.", + _Reasons.UNSUPPORTED_CIPHER, + ) + + return self._deriver.derive(key_material, self._cipher.block_size // 8) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + if not constant_time.bytes_eq(self.derive(key_material), expected_key): + raise InvalidKey diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/pbkdf2.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/pbkdf2.py new file mode 100644 index 00000000..82689ebc --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/pbkdf2.py @@ -0,0 +1,62 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, + _Reasons, +) +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import constant_time, hashes +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + + +class PBKDF2HMAC(KeyDerivationFunction): + def __init__( + self, + algorithm: hashes.HashAlgorithm, + length: int, + salt: bytes, + iterations: int, + backend: typing.Any = None, + ): + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.pbkdf2_hmac_supported(algorithm): + raise UnsupportedAlgorithm( + f"{algorithm.name} is not supported for PBKDF2.", + _Reasons.UNSUPPORTED_HASH, + ) + self._used = False + self._algorithm = algorithm + self._length = length + utils._check_bytes("salt", salt) + self._salt = salt + self._iterations = iterations + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized("PBKDF2 instances can only be used once.") + self._used = True + + return rust_openssl.kdf.derive_pbkdf2_hmac( + key_material, + self._algorithm, + self._salt, + self._iterations, + self._length, + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + derived_key = self.derive(key_material) + if not constant_time.bytes_eq(derived_key, expected_key): + raise InvalidKey("Keys do not match.") diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/scrypt.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/scrypt.py new file mode 100644 index 00000000..05a4f675 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/scrypt.py @@ -0,0 +1,80 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +import sys +import typing + +from cryptography import utils +from cryptography.exceptions import ( + AlreadyFinalized, + InvalidKey, + UnsupportedAlgorithm, +) +from cryptography.hazmat.bindings._rust import openssl as rust_openssl +from cryptography.hazmat.primitives import constant_time +from cryptography.hazmat.primitives.kdf import KeyDerivationFunction + +# This is used by the scrypt tests to skip tests that require more memory +# than the MEM_LIMIT +_MEM_LIMIT = sys.maxsize // 2 + + +class Scrypt(KeyDerivationFunction): + def __init__( + self, + salt: bytes, + length: int, + n: int, + r: int, + p: int, + backend: typing.Any = None, + ): + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.scrypt_supported(): + raise UnsupportedAlgorithm( + "This version of OpenSSL does not support scrypt" + ) + self._length = length + utils._check_bytes("salt", salt) + if n < 2 or (n & (n - 1)) != 0: + raise ValueError("n must be greater than 1 and be a power of 2.") + + if r < 1: + raise ValueError("r must be greater than or equal to 1.") + + if p < 1: + raise ValueError("p must be greater than or equal to 1.") + + self._used = False + self._salt = salt + self._n = n + self._r = r + self._p = p + + def derive(self, key_material: bytes) -> bytes: + if self._used: + raise AlreadyFinalized("Scrypt instances can only be used once.") + self._used = True + + utils._check_byteslike("key_material", key_material) + + return rust_openssl.kdf.derive_scrypt( + key_material, + self._salt, + self._n, + self._r, + self._p, + _MEM_LIMIT, + self._length, + ) + + def verify(self, key_material: bytes, expected_key: bytes) -> None: + derived_key = self.derive(key_material) + if not constant_time.bytes_eq(derived_key, expected_key): + raise InvalidKey("Keys do not match.") diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/x963kdf.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/x963kdf.py new file mode 100644 index 00000000..6e38366a --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/kdf/x963kdf.py @@ -0,0 +1,61 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
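The `x963kdf.py` module below implements the ANSI X9.63 KDF: digests of `key_material || counter || sharedinfo` are concatenated for an incrementing 32-bit big-endian counter until `length` bytes are produced. A minimal sketch, assuming an illustrative `sharedinfo` value:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.x963kdf import X963KDF

sharedinfo = b"ANSI X9.63 example"  # illustrative SharedInfo
xkdf = X963KDF(algorithm=hashes.SHA256(), length=32, sharedinfo=sharedinfo)
key = xkdf.derive(b"input key material")
```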
+
+from __future__ import annotations
+
+import typing
+
+from cryptography import utils
+from cryptography.exceptions import AlreadyFinalized, InvalidKey
+from cryptography.hazmat.primitives import constant_time, hashes
+from cryptography.hazmat.primitives.kdf import KeyDerivationFunction
+
+
+def _int_to_u32be(n: int) -> bytes:
+    return n.to_bytes(length=4, byteorder="big")
+
+
+class X963KDF(KeyDerivationFunction):
+    def __init__(
+        self,
+        algorithm: hashes.HashAlgorithm,
+        length: int,
+        sharedinfo: bytes | None,
+        backend: typing.Any = None,
+    ):
+        max_len = algorithm.digest_size * (2**32 - 1)
+        if length > max_len:
+            raise ValueError(f"Cannot derive keys larger than {max_len} bits.")
+        if sharedinfo is not None:
+            utils._check_bytes("sharedinfo", sharedinfo)
+
+        self._algorithm = algorithm
+        self._length = length
+        self._sharedinfo = sharedinfo
+        self._used = False
+
+    def derive(self, key_material: bytes) -> bytes:
+        if self._used:
+            raise AlreadyFinalized
+        self._used = True
+        utils._check_byteslike("key_material", key_material)
+        output = [b""]
+        outlen = 0
+        counter = 1
+
+        while self._length > outlen:
+            h = hashes.Hash(self._algorithm)
+            h.update(key_material)
+            h.update(_int_to_u32be(counter))
+            if self._sharedinfo is not None:
+                h.update(self._sharedinfo)
+            output.append(h.finalize())
+            outlen += len(output[-1])
+            counter += 1
+
+        return b"".join(output)[: self._length]
+
+    def verify(self, key_material: bytes, expected_key: bytes) -> None:
+        if not constant_time.bytes_eq(self.derive(key_material), expected_key):
+            raise InvalidKey
diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/keywrap.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/keywrap.py
new file mode 100644
index 00000000..b93d87d3
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/keywrap.py
@@ -0,0 +1,177 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
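The `keywrap.py` module below implements RFC 3394 AES key wrap (and the RFC 5649 padded variant) in pure Python on top of AES-ECB, which is safe here because every wrap step is an independent 16-byte block operation. A minimal round-trip sketch with throwaway keys:

```python
import os

from cryptography.hazmat.primitives.keywrap import (
    InvalidUnwrap,
    aes_key_unwrap,
    aes_key_wrap,
)

wrapping_key = os.urandom(32)  # KEK must be 16, 24, or 32 bytes
key_to_wrap = os.urandom(16)   # at least 16 bytes and a multiple of 8

wrapped = aes_key_wrap(wrapping_key, key_to_wrap)
assert aes_key_unwrap(wrapping_key, wrapped) == key_to_wrap

# Tampered ciphertext fails the integrity check on the initial value.
try:
    aes_key_unwrap(wrapping_key, bytes(len(wrapped)))
except InvalidUnwrap:
    pass
```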
+ +from __future__ import annotations + +import typing + +from cryptography.hazmat.primitives.ciphers import Cipher +from cryptography.hazmat.primitives.ciphers.algorithms import AES +from cryptography.hazmat.primitives.ciphers.modes import ECB +from cryptography.hazmat.primitives.constant_time import bytes_eq + + +def _wrap_core( + wrapping_key: bytes, + a: bytes, + r: list[bytes], +) -> bytes: + # RFC 3394 Key Wrap - 2.2.1 (index method) + encryptor = Cipher(AES(wrapping_key), ECB()).encryptor() + n = len(r) + for j in range(6): + for i in range(n): + # every encryption operation is a discrete 16 byte chunk (because + # AES has a 128-bit block size) and since we're using ECB it is + # safe to reuse the encryptor for the entire operation + b = encryptor.update(a + r[i]) + a = ( + int.from_bytes(b[:8], byteorder="big") ^ ((n * j) + i + 1) + ).to_bytes(length=8, byteorder="big") + r[i] = b[-8:] + + assert encryptor.finalize() == b"" + + return a + b"".join(r) + + +def aes_key_wrap( + wrapping_key: bytes, + key_to_wrap: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + if len(key_to_wrap) < 16: + raise ValueError("The key to wrap must be at least 16 bytes") + + if len(key_to_wrap) % 8 != 0: + raise ValueError("The key to wrap must be a multiple of 8 bytes") + + a = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6" + r = [key_to_wrap[i : i + 8] for i in range(0, len(key_to_wrap), 8)] + return _wrap_core(wrapping_key, a, r) + + +def _unwrap_core( + wrapping_key: bytes, + a: bytes, + r: list[bytes], +) -> tuple[bytes, list[bytes]]: + # Implement RFC 3394 Key Unwrap - 2.2.2 (index method) + decryptor = Cipher(AES(wrapping_key), ECB()).decryptor() + n = len(r) + for j in reversed(range(6)): + for i in reversed(range(n)): + atr = ( + int.from_bytes(a, byteorder="big") ^ ((n * j) + i + 1) + ).to_bytes(length=8, byteorder="big") + r[i] + # every decryption operation is a discrete 16 byte chunk so + # it is safe to reuse the decryptor for the entire operation + b = decryptor.update(atr) + a = b[:8] + r[i] = b[-8:] + + assert decryptor.finalize() == b"" + return a, r + + +def aes_key_wrap_with_padding( + wrapping_key: bytes, + key_to_wrap: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + aiv = b"\xa6\x59\x59\xa6" + len(key_to_wrap).to_bytes( + length=4, byteorder="big" + ) + # pad the key to wrap if necessary + pad = (8 - (len(key_to_wrap) % 8)) % 8 + key_to_wrap = key_to_wrap + b"\x00" * pad + if len(key_to_wrap) == 8: + # RFC 5649 - 4.1 - exactly 8 octets after padding + encryptor = Cipher(AES(wrapping_key), ECB()).encryptor() + b = encryptor.update(aiv + key_to_wrap) + assert encryptor.finalize() == b"" + return b + else: + r = [key_to_wrap[i : i + 8] for i in range(0, len(key_to_wrap), 8)] + return _wrap_core(wrapping_key, aiv, r) + + +def aes_key_unwrap_with_padding( + wrapping_key: bytes, + wrapped_key: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapped_key) < 16: + raise InvalidUnwrap("Must be at least 16 bytes") + + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + if len(wrapped_key) == 16: + # RFC 5649 - 4.2 - exactly two 64-bit blocks + decryptor = Cipher(AES(wrapping_key), ECB()).decryptor() + out = decryptor.update(wrapped_key) + assert decryptor.finalize() == b"" + a = out[:8] + data = 
out[8:] + n = 1 + else: + r = [wrapped_key[i : i + 8] for i in range(0, len(wrapped_key), 8)] + encrypted_aiv = r.pop(0) + n = len(r) + a, r = _unwrap_core(wrapping_key, encrypted_aiv, r) + data = b"".join(r) + + # 1) Check that MSB(32,A) = A65959A6. + # 2) Check that 8*(n-1) < LSB(32,A) <= 8*n. If so, let + # MLI = LSB(32,A). + # 3) Let b = (8*n)-MLI, and then check that the rightmost b octets of + # the output data are zero. + mli = int.from_bytes(a[4:], byteorder="big") + b = (8 * n) - mli + if ( + not bytes_eq(a[:4], b"\xa6\x59\x59\xa6") + or not 8 * (n - 1) < mli <= 8 * n + or (b != 0 and not bytes_eq(data[-b:], b"\x00" * b)) + ): + raise InvalidUnwrap() + + if b == 0: + return data + else: + return data[:-b] + + +def aes_key_unwrap( + wrapping_key: bytes, + wrapped_key: bytes, + backend: typing.Any = None, +) -> bytes: + if len(wrapped_key) < 24: + raise InvalidUnwrap("Must be at least 24 bytes") + + if len(wrapped_key) % 8 != 0: + raise InvalidUnwrap("The wrapped key must be a multiple of 8 bytes") + + if len(wrapping_key) not in [16, 24, 32]: + raise ValueError("The wrapping key must be a valid AES key length") + + aiv = b"\xa6\xa6\xa6\xa6\xa6\xa6\xa6\xa6" + r = [wrapped_key[i : i + 8] for i in range(0, len(wrapped_key), 8)] + a = r.pop(0) + a, r = _unwrap_core(wrapping_key, a, r) + if not bytes_eq(a, aiv): + raise InvalidUnwrap() + + return b"".join(r) + + +class InvalidUnwrap(Exception): + pass diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/padding.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/padding.py new file mode 100644 index 00000000..d1ca775f --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/padding.py @@ -0,0 +1,204 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import typing + +from cryptography import utils +from cryptography.exceptions import AlreadyFinalized +from cryptography.hazmat.bindings._rust import ( + PKCS7PaddingContext, + check_ansix923_padding, + check_pkcs7_padding, +) + + +class PaddingContext(metaclass=abc.ABCMeta): + @abc.abstractmethod + def update(self, data: bytes) -> bytes: + """ + Pads the provided bytes and returns any available data as bytes. + """ + + @abc.abstractmethod + def finalize(self) -> bytes: + """ + Finalize the padding, returns bytes. 
+ """ + + +def _byte_padding_check(block_size: int) -> None: + if not (0 <= block_size <= 2040): + raise ValueError("block_size must be in range(0, 2041).") + + if block_size % 8 != 0: + raise ValueError("block_size must be a multiple of 8.") + + +def _byte_padding_update( + buffer_: bytes | None, data: bytes, block_size: int +) -> tuple[bytes, bytes]: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + utils._check_byteslike("data", data) + + buffer_ += bytes(data) + + finished_blocks = len(buffer_) // (block_size // 8) + + result = buffer_[: finished_blocks * (block_size // 8)] + buffer_ = buffer_[finished_blocks * (block_size // 8) :] + + return buffer_, result + + +def _byte_padding_pad( + buffer_: bytes | None, + block_size: int, + paddingfn: typing.Callable[[int], bytes], +) -> bytes: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + pad_size = block_size // 8 - len(buffer_) + return buffer_ + paddingfn(pad_size) + + +def _byte_unpadding_update( + buffer_: bytes | None, data: bytes, block_size: int +) -> tuple[bytes, bytes]: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + utils._check_byteslike("data", data) + + buffer_ += bytes(data) + + finished_blocks = max(len(buffer_) // (block_size // 8) - 1, 0) + + result = buffer_[: finished_blocks * (block_size // 8)] + buffer_ = buffer_[finished_blocks * (block_size // 8) :] + + return buffer_, result + + +def _byte_unpadding_check( + buffer_: bytes | None, + block_size: int, + checkfn: typing.Callable[[bytes], int], +) -> bytes: + if buffer_ is None: + raise AlreadyFinalized("Context was already finalized.") + + if len(buffer_) != block_size // 8: + raise ValueError("Invalid padding bytes.") + + valid = checkfn(buffer_) + + if not valid: + raise ValueError("Invalid padding bytes.") + + pad_size = buffer_[-1] + return buffer_[:-pad_size] + + +class PKCS7: + def __init__(self, block_size: int): + _byte_padding_check(block_size) + self.block_size = block_size + + def padder(self) -> PaddingContext: + return PKCS7PaddingContext(self.block_size) + + def unpadder(self) -> PaddingContext: + return _PKCS7UnpaddingContext(self.block_size) + + +class _PKCS7UnpaddingContext(PaddingContext): + _buffer: bytes | None + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_unpadding_update( + self._buffer, data, self.block_size + ) + return result + + def finalize(self) -> bytes: + result = _byte_unpadding_check( + self._buffer, self.block_size, check_pkcs7_padding + ) + self._buffer = None + return result + + +PaddingContext.register(PKCS7PaddingContext) + + +class ANSIX923: + def __init__(self, block_size: int): + _byte_padding_check(block_size) + self.block_size = block_size + + def padder(self) -> PaddingContext: + return _ANSIX923PaddingContext(self.block_size) + + def unpadder(self) -> PaddingContext: + return _ANSIX923UnpaddingContext(self.block_size) + + +class _ANSIX923PaddingContext(PaddingContext): + _buffer: bytes | None + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_padding_update( + self._buffer, data, self.block_size + ) + return result + + def _padding(self, size: 
int) -> bytes: + return bytes([0]) * (size - 1) + bytes([size]) + + def finalize(self) -> bytes: + result = _byte_padding_pad( + self._buffer, self.block_size, self._padding + ) + self._buffer = None + return result + + +class _ANSIX923UnpaddingContext(PaddingContext): + _buffer: bytes | None + + def __init__(self, block_size: int): + self.block_size = block_size + # TODO: more copies than necessary, we should use zero-buffer (#193) + self._buffer = b"" + + def update(self, data: bytes) -> bytes: + self._buffer, result = _byte_unpadding_update( + self._buffer, data, self.block_size + ) + return result + + def finalize(self) -> bytes: + result = _byte_unpadding_check( + self._buffer, + self.block_size, + check_ansix923_padding, + ) + self._buffer = None + return result diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/poly1305.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/poly1305.py new file mode 100644 index 00000000..7f5a77a5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/poly1305.py @@ -0,0 +1,11 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +__all__ = ["Poly1305"] + +Poly1305 = rust_openssl.poly1305.Poly1305 diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/__init__.py new file mode 100644 index 00000000..07b2264b --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/__init__.py @@ -0,0 +1,63 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
+ +from __future__ import annotations + +from cryptography.hazmat.primitives._serialization import ( + BestAvailableEncryption, + Encoding, + KeySerializationEncryption, + NoEncryption, + ParameterFormat, + PrivateFormat, + PublicFormat, + _KeySerializationEncryption, +) +from cryptography.hazmat.primitives.serialization.base import ( + load_der_parameters, + load_der_private_key, + load_der_public_key, + load_pem_parameters, + load_pem_private_key, + load_pem_public_key, +) +from cryptography.hazmat.primitives.serialization.ssh import ( + SSHCertificate, + SSHCertificateBuilder, + SSHCertificateType, + SSHCertPrivateKeyTypes, + SSHCertPublicKeyTypes, + SSHPrivateKeyTypes, + SSHPublicKeyTypes, + load_ssh_private_key, + load_ssh_public_identity, + load_ssh_public_key, +) + +__all__ = [ + "BestAvailableEncryption", + "Encoding", + "KeySerializationEncryption", + "NoEncryption", + "ParameterFormat", + "PrivateFormat", + "PublicFormat", + "SSHCertPrivateKeyTypes", + "SSHCertPublicKeyTypes", + "SSHCertificate", + "SSHCertificateBuilder", + "SSHCertificateType", + "SSHPrivateKeyTypes", + "SSHPublicKeyTypes", + "_KeySerializationEncryption", + "load_der_parameters", + "load_der_private_key", + "load_der_public_key", + "load_pem_parameters", + "load_pem_private_key", + "load_pem_public_key", + "load_ssh_private_key", + "load_ssh_public_identity", + "load_ssh_public_key", +] diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/base.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/base.py new file mode 100644 index 00000000..e7c998b7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/base.py @@ -0,0 +1,14 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from cryptography.hazmat.bindings._rust import openssl as rust_openssl + +load_pem_private_key = rust_openssl.keys.load_pem_private_key +load_der_private_key = rust_openssl.keys.load_der_private_key + +load_pem_public_key = rust_openssl.keys.load_pem_public_key +load_der_public_key = rust_openssl.keys.load_der_public_key + +load_pem_parameters = rust_openssl.dh.from_pem_parameters +load_der_parameters = rust_openssl.dh.from_der_parameters diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs12.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs12.py new file mode 100644 index 00000000..549e1f99 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs12.py @@ -0,0 +1,156 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
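The `pkcs12.py` module below exposes the Rust-backed PKCS#12 loaders plus the `PKCS12KeyAndCertificates` container defined further down. A minimal loading sketch, where `bundle.p12` and its passphrase are hypothetical inputs:

```python
from cryptography.hazmat.primitives.serialization import pkcs12

with open("bundle.p12", "rb") as f:  # hypothetical PKCS#12 file
    p12 = pkcs12.load_pkcs12(f.read(), b"passphrase")

private_key = p12.key  # may be None for certificate-only bundles
cert = p12.cert.certificate if p12.cert is not None else None
extra_certs = [entry.certificate for entry in p12.additional_certs]
```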
+
+from __future__ import annotations
+
+import typing
+
+from cryptography import x509
+from cryptography.hazmat.bindings._rust import pkcs12 as rust_pkcs12
+from cryptography.hazmat.primitives import serialization
+from cryptography.hazmat.primitives._serialization import PBES as PBES
+from cryptography.hazmat.primitives.asymmetric import (
+    dsa,
+    ec,
+    ed448,
+    ed25519,
+    rsa,
+)
+from cryptography.hazmat.primitives.asymmetric.types import PrivateKeyTypes
+
+__all__ = [
+    "PBES",
+    "PKCS12Certificate",
+    "PKCS12KeyAndCertificates",
+    "PKCS12PrivateKeyTypes",
+    "load_key_and_certificates",
+    "load_pkcs12",
+    "serialize_key_and_certificates",
+]
+
+PKCS12PrivateKeyTypes = typing.Union[
+    rsa.RSAPrivateKey,
+    dsa.DSAPrivateKey,
+    ec.EllipticCurvePrivateKey,
+    ed25519.Ed25519PrivateKey,
+    ed448.Ed448PrivateKey,
+]
+
+
+PKCS12Certificate = rust_pkcs12.PKCS12Certificate
+
+
+class PKCS12KeyAndCertificates:
+    def __init__(
+        self,
+        key: PrivateKeyTypes | None,
+        cert: PKCS12Certificate | None,
+        additional_certs: list[PKCS12Certificate],
+    ):
+        if key is not None and not isinstance(
+            key,
+            (
+                rsa.RSAPrivateKey,
+                dsa.DSAPrivateKey,
+                ec.EllipticCurvePrivateKey,
+                ed25519.Ed25519PrivateKey,
+                ed448.Ed448PrivateKey,
+            ),
+        ):
+            raise TypeError(
+                "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
+                " private key, or None."
+            )
+        if cert is not None and not isinstance(cert, PKCS12Certificate):
+            raise TypeError("cert must be a PKCS12Certificate object or None")
+        if not all(
+            isinstance(add_cert, PKCS12Certificate)
+            for add_cert in additional_certs
+        ):
+            raise TypeError(
+                "all values in additional_certs must be PKCS12Certificate"
+                " objects"
+            )
+        self._key = key
+        self._cert = cert
+        self._additional_certs = additional_certs
+
+    @property
+    def key(self) -> PrivateKeyTypes | None:
+        return self._key
+
+    @property
+    def cert(self) -> PKCS12Certificate | None:
+        return self._cert
+
+    @property
+    def additional_certs(self) -> list[PKCS12Certificate]:
+        return self._additional_certs
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PKCS12KeyAndCertificates):
+            return NotImplemented
+
+        return (
+            self.key == other.key
+            and self.cert == other.cert
+            and self.additional_certs == other.additional_certs
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.key, self.cert, tuple(self.additional_certs)))
+
+    def __repr__(self) -> str:
+        fmt = (
+            "<PKCS12KeyAndCertificates(key={}, cert={}, additional_certs={})>"
+        )
+        return fmt.format(self.key, self.cert, self.additional_certs)
+
+
+load_key_and_certificates = rust_pkcs12.load_key_and_certificates
+load_pkcs12 = rust_pkcs12.load_pkcs12
+
+
+_PKCS12CATypes = typing.Union[
+    x509.Certificate,
+    PKCS12Certificate,
+]
+
+
+def serialize_key_and_certificates(
+    name: bytes | None,
+    key: PKCS12PrivateKeyTypes | None,
+    cert: x509.Certificate | None,
+    cas: typing.Iterable[_PKCS12CATypes] | None,
+    encryption_algorithm: serialization.KeySerializationEncryption,
+) -> bytes:
+    if key is not None and not isinstance(
+        key,
+        (
+            rsa.RSAPrivateKey,
+            dsa.DSAPrivateKey,
+            ec.EllipticCurvePrivateKey,
+            ed25519.Ed25519PrivateKey,
+            ed448.Ed448PrivateKey,
+        ),
+    ):
+        raise TypeError(
+            "Key must be RSA, DSA, EllipticCurve, ED25519, or ED448"
+            " private key, or None."
+ ) + + if not isinstance( + encryption_algorithm, serialization.KeySerializationEncryption + ): + raise TypeError( + "Key encryption algorithm must be a " + "KeySerializationEncryption instance" + ) + + if key is None and cert is None and not cas: + raise ValueError("You must supply at least one of key, cert, or cas") + + return rust_pkcs12.serialize_key_and_certificates( + name, key, cert, cas, encryption_algorithm + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs7.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs7.py new file mode 100644 index 00000000..97ea9db8 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/pkcs7.py @@ -0,0 +1,336 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import email.base64mime +import email.generator +import email.message +import email.policy +import io +import typing + +from cryptography import utils, x509 +from cryptography.exceptions import UnsupportedAlgorithm, _Reasons +from cryptography.hazmat.bindings._rust import pkcs7 as rust_pkcs7 +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa +from cryptography.utils import _check_byteslike + +load_pem_pkcs7_certificates = rust_pkcs7.load_pem_pkcs7_certificates + +load_der_pkcs7_certificates = rust_pkcs7.load_der_pkcs7_certificates + +serialize_certificates = rust_pkcs7.serialize_certificates + +PKCS7HashTypes = typing.Union[ + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, +] + +PKCS7PrivateKeyTypes = typing.Union[ + rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey +] + + +class PKCS7Options(utils.Enum): + Text = "Add text/plain MIME type" + Binary = "Don't translate input data into canonical MIME format" + DetachedSignature = "Don't embed data in the PKCS7 structure" + NoCapabilities = "Don't embed SMIME capabilities" + NoAttributes = "Don't embed authenticatedAttributes" + NoCerts = "Don't embed signer certificate" + + +class PKCS7SignatureBuilder: + def __init__( + self, + data: bytes | None = None, + signers: list[ + tuple[ + x509.Certificate, + PKCS7PrivateKeyTypes, + PKCS7HashTypes, + padding.PSS | padding.PKCS1v15 | None, + ] + ] = [], + additional_certs: list[x509.Certificate] = [], + ): + self._data = data + self._signers = signers + self._additional_certs = additional_certs + + def set_data(self, data: bytes) -> PKCS7SignatureBuilder: + _check_byteslike("data", data) + if self._data is not None: + raise ValueError("data may only be set once") + + return PKCS7SignatureBuilder(data, self._signers) + + def add_signer( + self, + certificate: x509.Certificate, + private_key: PKCS7PrivateKeyTypes, + hash_algorithm: PKCS7HashTypes, + *, + rsa_padding: padding.PSS | padding.PKCS1v15 | None = None, + ) -> PKCS7SignatureBuilder: + if not isinstance( + hash_algorithm, + ( + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + ), + ): + raise TypeError( + "hash_algorithm must be one of hashes.SHA224, " + "SHA256, SHA384, or SHA512" + ) + if not isinstance(certificate, x509.Certificate): + raise TypeError("certificate must be a x509.Certificate") + + if not isinstance( + private_key, (rsa.RSAPrivateKey, ec.EllipticCurvePrivateKey) + ): + raise 
TypeError("Only RSA & EC keys are supported at this time.") + + if rsa_padding is not None: + if not isinstance(rsa_padding, (padding.PSS, padding.PKCS1v15)): + raise TypeError("Padding must be PSS or PKCS1v15") + if not isinstance(private_key, rsa.RSAPrivateKey): + raise TypeError("Padding is only supported for RSA keys") + + return PKCS7SignatureBuilder( + self._data, + [ + *self._signers, + (certificate, private_key, hash_algorithm, rsa_padding), + ], + ) + + def add_certificate( + self, certificate: x509.Certificate + ) -> PKCS7SignatureBuilder: + if not isinstance(certificate, x509.Certificate): + raise TypeError("certificate must be a x509.Certificate") + + return PKCS7SignatureBuilder( + self._data, self._signers, [*self._additional_certs, certificate] + ) + + def sign( + self, + encoding: serialization.Encoding, + options: typing.Iterable[PKCS7Options], + backend: typing.Any = None, + ) -> bytes: + if len(self._signers) == 0: + raise ValueError("Must have at least one signer") + if self._data is None: + raise ValueError("You must add data to sign") + options = list(options) + if not all(isinstance(x, PKCS7Options) for x in options): + raise ValueError("options must be from the PKCS7Options enum") + if encoding not in ( + serialization.Encoding.PEM, + serialization.Encoding.DER, + serialization.Encoding.SMIME, + ): + raise ValueError( + "Must be PEM, DER, or SMIME from the Encoding enum" + ) + + # Text is a meaningless option unless it is accompanied by + # DetachedSignature + if ( + PKCS7Options.Text in options + and PKCS7Options.DetachedSignature not in options + ): + raise ValueError( + "When passing the Text option you must also pass " + "DetachedSignature" + ) + + if PKCS7Options.Text in options and encoding in ( + serialization.Encoding.DER, + serialization.Encoding.PEM, + ): + raise ValueError( + "The Text option is only available for SMIME serialization" + ) + + # No attributes implies no capabilities so we'll error if you try to + # pass both. + if ( + PKCS7Options.NoAttributes in options + and PKCS7Options.NoCapabilities in options + ): + raise ValueError( + "NoAttributes is a superset of NoCapabilities. Do not pass " + "both values." 
+ ) + + return rust_pkcs7.sign_and_serialize(self, encoding, options) + + +class PKCS7EnvelopeBuilder: + def __init__( + self, + *, + _data: bytes | None = None, + _recipients: list[x509.Certificate] | None = None, + ): + from cryptography.hazmat.backends.openssl.backend import ( + backend as ossl, + ) + + if not ossl.rsa_encryption_supported(padding=padding.PKCS1v15()): + raise UnsupportedAlgorithm( + "RSA with PKCS1 v1.5 padding is not supported by this version" + " of OpenSSL.", + _Reasons.UNSUPPORTED_PADDING, + ) + self._data = _data + self._recipients = _recipients if _recipients is not None else [] + + def set_data(self, data: bytes) -> PKCS7EnvelopeBuilder: + _check_byteslike("data", data) + if self._data is not None: + raise ValueError("data may only be set once") + + return PKCS7EnvelopeBuilder(_data=data, _recipients=self._recipients) + + def add_recipient( + self, + certificate: x509.Certificate, + ) -> PKCS7EnvelopeBuilder: + if not isinstance(certificate, x509.Certificate): + raise TypeError("certificate must be a x509.Certificate") + + if not isinstance(certificate.public_key(), rsa.RSAPublicKey): + raise TypeError("Only RSA keys are supported at this time.") + + return PKCS7EnvelopeBuilder( + _data=self._data, + _recipients=[ + *self._recipients, + certificate, + ], + ) + + def encrypt( + self, + encoding: serialization.Encoding, + options: typing.Iterable[PKCS7Options], + ) -> bytes: + if len(self._recipients) == 0: + raise ValueError("Must have at least one recipient") + if self._data is None: + raise ValueError("You must add data to encrypt") + options = list(options) + if not all(isinstance(x, PKCS7Options) for x in options): + raise ValueError("options must be from the PKCS7Options enum") + if encoding not in ( + serialization.Encoding.PEM, + serialization.Encoding.DER, + serialization.Encoding.SMIME, + ): + raise ValueError( + "Must be PEM, DER, or SMIME from the Encoding enum" + ) + + # Only allow options that make sense for encryption + if any( + opt not in [PKCS7Options.Text, PKCS7Options.Binary] + for opt in options + ): + raise ValueError( + "Only the following options are supported for encryption: " + "Text, Binary" + ) + elif PKCS7Options.Text in options and PKCS7Options.Binary in options: + # OpenSSL accepts both options at the same time, but ignores Text. + # We fail defensively to avoid unexpected outputs. + raise ValueError( + "Cannot use Binary and Text options at the same time" + ) + + return rust_pkcs7.encrypt_and_serialize(self, encoding, options) + + +def _smime_signed_encode( + data: bytes, signature: bytes, micalg: str, text_mode: bool +) -> bytes: + # This function works pretty hard to replicate what OpenSSL does + # precisely. For good and for ill. 
+ + m = email.message.Message() + m.add_header("MIME-Version", "1.0") + m.add_header( + "Content-Type", + "multipart/signed", + protocol="application/x-pkcs7-signature", + micalg=micalg, + ) + + m.preamble = "This is an S/MIME signed message\n" + + msg_part = OpenSSLMimePart() + msg_part.set_payload(data) + if text_mode: + msg_part.add_header("Content-Type", "text/plain") + m.attach(msg_part) + + sig_part = email.message.MIMEPart() + sig_part.add_header( + "Content-Type", "application/x-pkcs7-signature", name="smime.p7s" + ) + sig_part.add_header("Content-Transfer-Encoding", "base64") + sig_part.add_header( + "Content-Disposition", "attachment", filename="smime.p7s" + ) + sig_part.set_payload( + email.base64mime.body_encode(signature, maxlinelen=65) + ) + del sig_part["MIME-Version"] + m.attach(sig_part) + + fp = io.BytesIO() + g = email.generator.BytesGenerator( + fp, + maxheaderlen=0, + mangle_from_=False, + policy=m.policy.clone(linesep="\r\n"), + ) + g.flatten(m) + return fp.getvalue() + + +def _smime_enveloped_encode(data: bytes) -> bytes: + m = email.message.Message() + m.add_header("MIME-Version", "1.0") + m.add_header("Content-Disposition", "attachment", filename="smime.p7m") + m.add_header( + "Content-Type", + "application/pkcs7-mime", + smime_type="enveloped-data", + name="smime.p7m", + ) + m.add_header("Content-Transfer-Encoding", "base64") + + m.set_payload(email.base64mime.body_encode(data, maxlinelen=65)) + + return m.as_bytes(policy=m.policy.clone(linesep="\n", max_line_length=0)) + + +class OpenSSLMimePart(email.message.MIMEPart): + # A MIMEPart subclass that replicates OpenSSL's behavior of not including + # a newline if there are no headers. + def _write_headers(self, generator) -> None: + if list(self.raw_items()): + generator._write_headers(self) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/ssh.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/ssh.py new file mode 100644 index 00000000..c01afb0c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/serialization/ssh.py @@ -0,0 +1,1569 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
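The `ssh.py` module below is by far the largest serialization file: it parses and emits OpenSSH public keys, `openssh-key-v1` private keys (optionally bcrypt-encrypted with the ciphers listed in `_SSH_CIPHERS`), and OpenSSH certificates. A minimal loading sketch, where the file paths are hypothetical and an encrypted key would additionally require the optional `bcrypt` dependency:

```python
from cryptography.hazmat.primitives.serialization import (
    load_ssh_private_key,
    load_ssh_public_key,
)

with open("id_ed25519", "rb") as f:  # hypothetical unencrypted key file
    private_key = load_ssh_private_key(f.read(), password=None)

with open("id_ed25519.pub", "rb") as f:  # hypothetical public key file
    public_key = load_ssh_public_key(f.read())
```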
+ +from __future__ import annotations + +import binascii +import enum +import os +import re +import typing +import warnings +from base64 import encodebytes as _base64_encode +from dataclasses import dataclass + +from cryptography import utils +from cryptography.exceptions import UnsupportedAlgorithm +from cryptography.hazmat.primitives import hashes +from cryptography.hazmat.primitives.asymmetric import ( + dsa, + ec, + ed25519, + padding, + rsa, +) +from cryptography.hazmat.primitives.asymmetric import utils as asym_utils +from cryptography.hazmat.primitives.ciphers import ( + AEADDecryptionContext, + Cipher, + algorithms, + modes, +) +from cryptography.hazmat.primitives.serialization import ( + Encoding, + KeySerializationEncryption, + NoEncryption, + PrivateFormat, + PublicFormat, + _KeySerializationEncryption, +) + +try: + from bcrypt import kdf as _bcrypt_kdf + + _bcrypt_supported = True +except ImportError: + _bcrypt_supported = False + + def _bcrypt_kdf( + password: bytes, + salt: bytes, + desired_key_bytes: int, + rounds: int, + ignore_few_rounds: bool = False, + ) -> bytes: + raise UnsupportedAlgorithm("Need bcrypt module") + + +_SSH_ED25519 = b"ssh-ed25519" +_SSH_RSA = b"ssh-rsa" +_SSH_DSA = b"ssh-dss" +_ECDSA_NISTP256 = b"ecdsa-sha2-nistp256" +_ECDSA_NISTP384 = b"ecdsa-sha2-nistp384" +_ECDSA_NISTP521 = b"ecdsa-sha2-nistp521" +_CERT_SUFFIX = b"-cert-v01@openssh.com" + +# U2F application string suffixed pubkey +_SK_SSH_ED25519 = b"sk-ssh-ed25519@openssh.com" +_SK_SSH_ECDSA_NISTP256 = b"sk-ecdsa-sha2-nistp256@openssh.com" + +# These are not key types, only algorithms, so they cannot appear +# as a public key type +_SSH_RSA_SHA256 = b"rsa-sha2-256" +_SSH_RSA_SHA512 = b"rsa-sha2-512" + +_SSH_PUBKEY_RC = re.compile(rb"\A(\S+)[ \t]+(\S+)") +_SK_MAGIC = b"openssh-key-v1\0" +_SK_START = b"-----BEGIN OPENSSH PRIVATE KEY-----" +_SK_END = b"-----END OPENSSH PRIVATE KEY-----" +_BCRYPT = b"bcrypt" +_NONE = b"none" +_DEFAULT_CIPHER = b"aes256-ctr" +_DEFAULT_ROUNDS = 16 + +# re is only way to work on bytes-like data +_PEM_RC = re.compile(_SK_START + b"(.*?)" + _SK_END, re.DOTALL) + +# padding for max blocksize +_PADDING = memoryview(bytearray(range(1, 1 + 16))) + + +@dataclass +class _SSHCipher: + alg: type[algorithms.AES] + key_len: int + mode: type[modes.CTR] | type[modes.CBC] | type[modes.GCM] + block_len: int + iv_len: int + tag_len: int | None + is_aead: bool + + +# ciphers that are actually used in key wrapping +_SSH_CIPHERS: dict[bytes, _SSHCipher] = { + b"aes256-ctr": _SSHCipher( + alg=algorithms.AES, + key_len=32, + mode=modes.CTR, + block_len=16, + iv_len=16, + tag_len=None, + is_aead=False, + ), + b"aes256-cbc": _SSHCipher( + alg=algorithms.AES, + key_len=32, + mode=modes.CBC, + block_len=16, + iv_len=16, + tag_len=None, + is_aead=False, + ), + b"aes256-gcm@openssh.com": _SSHCipher( + alg=algorithms.AES, + key_len=32, + mode=modes.GCM, + block_len=16, + iv_len=12, + tag_len=16, + is_aead=True, + ), +} + +# map local curve name to key type +_ECDSA_KEY_TYPE = { + "secp256r1": _ECDSA_NISTP256, + "secp384r1": _ECDSA_NISTP384, + "secp521r1": _ECDSA_NISTP521, +} + + +def _get_ssh_key_type(key: SSHPrivateKeyTypes | SSHPublicKeyTypes) -> bytes: + if isinstance(key, ec.EllipticCurvePrivateKey): + key_type = _ecdsa_key_type(key.public_key()) + elif isinstance(key, ec.EllipticCurvePublicKey): + key_type = _ecdsa_key_type(key) + elif isinstance(key, (rsa.RSAPrivateKey, rsa.RSAPublicKey)): + key_type = _SSH_RSA + elif isinstance(key, (dsa.DSAPrivateKey, dsa.DSAPublicKey)): + key_type = _SSH_DSA + 
elif isinstance( + key, (ed25519.Ed25519PrivateKey, ed25519.Ed25519PublicKey) + ): + key_type = _SSH_ED25519 + else: + raise ValueError("Unsupported key type") + + return key_type + + +def _ecdsa_key_type(public_key: ec.EllipticCurvePublicKey) -> bytes: + """Return SSH key_type and curve_name for private key.""" + curve = public_key.curve + if curve.name not in _ECDSA_KEY_TYPE: + raise ValueError( + f"Unsupported curve for ssh private key: {curve.name!r}" + ) + return _ECDSA_KEY_TYPE[curve.name] + + +def _ssh_pem_encode( + data: bytes, + prefix: bytes = _SK_START + b"\n", + suffix: bytes = _SK_END + b"\n", +) -> bytes: + return b"".join([prefix, _base64_encode(data), suffix]) + + +def _check_block_size(data: bytes, block_len: int) -> None: + """Require data to be full blocks""" + if not data or len(data) % block_len != 0: + raise ValueError("Corrupt data: missing padding") + + +def _check_empty(data: bytes) -> None: + """All data should have been parsed.""" + if data: + raise ValueError("Corrupt data: unparsed data") + + +def _init_cipher( + ciphername: bytes, + password: bytes | None, + salt: bytes, + rounds: int, +) -> Cipher[modes.CBC | modes.CTR | modes.GCM]: + """Generate key + iv and return cipher.""" + if not password: + raise ValueError("Key is password-protected.") + + ciph = _SSH_CIPHERS[ciphername] + seed = _bcrypt_kdf( + password, salt, ciph.key_len + ciph.iv_len, rounds, True + ) + return Cipher( + ciph.alg(seed[: ciph.key_len]), + ciph.mode(seed[ciph.key_len :]), + ) + + +def _get_u32(data: memoryview) -> tuple[int, memoryview]: + """Uint32""" + if len(data) < 4: + raise ValueError("Invalid data") + return int.from_bytes(data[:4], byteorder="big"), data[4:] + + +def _get_u64(data: memoryview) -> tuple[int, memoryview]: + """Uint64""" + if len(data) < 8: + raise ValueError("Invalid data") + return int.from_bytes(data[:8], byteorder="big"), data[8:] + + +def _get_sshstr(data: memoryview) -> tuple[memoryview, memoryview]: + """Bytes with u32 length prefix""" + n, data = _get_u32(data) + if n > len(data): + raise ValueError("Invalid data") + return data[:n], data[n:] + + +def _get_mpint(data: memoryview) -> tuple[int, memoryview]: + """Big integer.""" + val, data = _get_sshstr(data) + if val and val[0] > 0x7F: + raise ValueError("Invalid data") + return int.from_bytes(val, "big"), data + + +def _to_mpint(val: int) -> bytes: + """Storage format for signed bigint.""" + if val < 0: + raise ValueError("negative mpint not allowed") + if not val: + return b"" + nbytes = (val.bit_length() + 8) // 8 + return utils.int_to_bytes(val, nbytes) + + +class _FragList: + """Build recursive structure without data copy.""" + + flist: list[bytes] + + def __init__(self, init: list[bytes] | None = None) -> None: + self.flist = [] + if init: + self.flist.extend(init) + + def put_raw(self, val: bytes) -> None: + """Add plain bytes""" + self.flist.append(val) + + def put_u32(self, val: int) -> None: + """Big-endian uint32""" + self.flist.append(val.to_bytes(length=4, byteorder="big")) + + def put_u64(self, val: int) -> None: + """Big-endian uint64""" + self.flist.append(val.to_bytes(length=8, byteorder="big")) + + def put_sshstr(self, val: bytes | _FragList) -> None: + """Bytes prefixed with u32 length""" + if isinstance(val, (bytes, memoryview, bytearray)): + self.put_u32(len(val)) + self.flist.append(val) + else: + self.put_u32(val.size()) + self.flist.extend(val.flist) + + def put_mpint(self, val: int) -> None: + """Big-endian bigint prefixed with u32 length""" + self.put_sshstr(_to_mpint(val)) + + 
def size(self) -> int: + """Current number of bytes""" + return sum(map(len, self.flist)) + + def render(self, dstbuf: memoryview, pos: int = 0) -> int: + """Write into bytearray""" + for frag in self.flist: + flen = len(frag) + start, pos = pos, pos + flen + dstbuf[start:pos] = frag + return pos + + def tobytes(self) -> bytes: + """Return as bytes""" + buf = memoryview(bytearray(self.size())) + self.render(buf) + return buf.tobytes() + + +class _SSHFormatRSA: + """Format for RSA keys. + + Public: + mpint e, n + Private: + mpint n, e, d, iqmp, p, q + """ + + def get_public( + self, data: memoryview + ) -> tuple[tuple[int, int], memoryview]: + """RSA public fields""" + e, data = _get_mpint(data) + n, data = _get_mpint(data) + return (e, n), data + + def load_public( + self, data: memoryview + ) -> tuple[rsa.RSAPublicKey, memoryview]: + """Make RSA public key from data.""" + (e, n), data = self.get_public(data) + public_numbers = rsa.RSAPublicNumbers(e, n) + public_key = public_numbers.public_key() + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> tuple[rsa.RSAPrivateKey, memoryview]: + """Make RSA private key from data.""" + n, data = _get_mpint(data) + e, data = _get_mpint(data) + d, data = _get_mpint(data) + iqmp, data = _get_mpint(data) + p, data = _get_mpint(data) + q, data = _get_mpint(data) + + if (e, n) != pubfields: + raise ValueError("Corrupt data: rsa field mismatch") + dmp1 = rsa.rsa_crt_dmp1(d, p) + dmq1 = rsa.rsa_crt_dmq1(d, q) + public_numbers = rsa.RSAPublicNumbers(e, n) + private_numbers = rsa.RSAPrivateNumbers( + p, q, d, dmp1, dmq1, iqmp, public_numbers + ) + private_key = private_numbers.private_key() + return private_key, data + + def encode_public( + self, public_key: rsa.RSAPublicKey, f_pub: _FragList + ) -> None: + """Write RSA public key""" + pubn = public_key.public_numbers() + f_pub.put_mpint(pubn.e) + f_pub.put_mpint(pubn.n) + + def encode_private( + self, private_key: rsa.RSAPrivateKey, f_priv: _FragList + ) -> None: + """Write RSA private key""" + private_numbers = private_key.private_numbers() + public_numbers = private_numbers.public_numbers + + f_priv.put_mpint(public_numbers.n) + f_priv.put_mpint(public_numbers.e) + + f_priv.put_mpint(private_numbers.d) + f_priv.put_mpint(private_numbers.iqmp) + f_priv.put_mpint(private_numbers.p) + f_priv.put_mpint(private_numbers.q) + + +class _SSHFormatDSA: + """Format for DSA keys. 
+ + Public: + mpint p, q, g, y + Private: + mpint p, q, g, y, x + """ + + def get_public(self, data: memoryview) -> tuple[tuple, memoryview]: + """DSA public fields""" + p, data = _get_mpint(data) + q, data = _get_mpint(data) + g, data = _get_mpint(data) + y, data = _get_mpint(data) + return (p, q, g, y), data + + def load_public( + self, data: memoryview + ) -> tuple[dsa.DSAPublicKey, memoryview]: + """Make DSA public key from data.""" + (p, q, g, y), data = self.get_public(data) + parameter_numbers = dsa.DSAParameterNumbers(p, q, g) + public_numbers = dsa.DSAPublicNumbers(y, parameter_numbers) + self._validate(public_numbers) + public_key = public_numbers.public_key() + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> tuple[dsa.DSAPrivateKey, memoryview]: + """Make DSA private key from data.""" + (p, q, g, y), data = self.get_public(data) + x, data = _get_mpint(data) + + if (p, q, g, y) != pubfields: + raise ValueError("Corrupt data: dsa field mismatch") + parameter_numbers = dsa.DSAParameterNumbers(p, q, g) + public_numbers = dsa.DSAPublicNumbers(y, parameter_numbers) + self._validate(public_numbers) + private_numbers = dsa.DSAPrivateNumbers(x, public_numbers) + private_key = private_numbers.private_key() + return private_key, data + + def encode_public( + self, public_key: dsa.DSAPublicKey, f_pub: _FragList + ) -> None: + """Write DSA public key""" + public_numbers = public_key.public_numbers() + parameter_numbers = public_numbers.parameter_numbers + self._validate(public_numbers) + + f_pub.put_mpint(parameter_numbers.p) + f_pub.put_mpint(parameter_numbers.q) + f_pub.put_mpint(parameter_numbers.g) + f_pub.put_mpint(public_numbers.y) + + def encode_private( + self, private_key: dsa.DSAPrivateKey, f_priv: _FragList + ) -> None: + """Write DSA private key""" + self.encode_public(private_key.public_key(), f_priv) + f_priv.put_mpint(private_key.private_numbers().x) + + def _validate(self, public_numbers: dsa.DSAPublicNumbers) -> None: + parameter_numbers = public_numbers.parameter_numbers + if parameter_numbers.p.bit_length() != 1024: + raise ValueError("SSH supports only 1024 bit DSA keys") + + +class _SSHFormatECDSA: + """Format for ECDSA keys. 
+ + Public: + str curve + bytes point + Private: + str curve + bytes point + mpint secret + """ + + def __init__(self, ssh_curve_name: bytes, curve: ec.EllipticCurve): + self.ssh_curve_name = ssh_curve_name + self.curve = curve + + def get_public( + self, data: memoryview + ) -> tuple[tuple[memoryview, memoryview], memoryview]: + """ECDSA public fields""" + curve, data = _get_sshstr(data) + point, data = _get_sshstr(data) + if curve != self.ssh_curve_name: + raise ValueError("Curve name mismatch") + if point[0] != 4: + raise NotImplementedError("Need uncompressed point") + return (curve, point), data + + def load_public( + self, data: memoryview + ) -> tuple[ec.EllipticCurvePublicKey, memoryview]: + """Make ECDSA public key from data.""" + (_, point), data = self.get_public(data) + public_key = ec.EllipticCurvePublicKey.from_encoded_point( + self.curve, point.tobytes() + ) + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> tuple[ec.EllipticCurvePrivateKey, memoryview]: + """Make ECDSA private key from data.""" + (curve_name, point), data = self.get_public(data) + secret, data = _get_mpint(data) + + if (curve_name, point) != pubfields: + raise ValueError("Corrupt data: ecdsa field mismatch") + private_key = ec.derive_private_key(secret, self.curve) + return private_key, data + + def encode_public( + self, public_key: ec.EllipticCurvePublicKey, f_pub: _FragList + ) -> None: + """Write ECDSA public key""" + point = public_key.public_bytes( + Encoding.X962, PublicFormat.UncompressedPoint + ) + f_pub.put_sshstr(self.ssh_curve_name) + f_pub.put_sshstr(point) + + def encode_private( + self, private_key: ec.EllipticCurvePrivateKey, f_priv: _FragList + ) -> None: + """Write ECDSA private key""" + public_key = private_key.public_key() + private_numbers = private_key.private_numbers() + + self.encode_public(public_key, f_priv) + f_priv.put_mpint(private_numbers.private_value) + + +class _SSHFormatEd25519: + """Format for Ed25519 keys. 
+ + Public: + bytes point + Private: + bytes point + bytes secret_and_point + """ + + def get_public( + self, data: memoryview + ) -> tuple[tuple[memoryview], memoryview]: + """Ed25519 public fields""" + point, data = _get_sshstr(data) + return (point,), data + + def load_public( + self, data: memoryview + ) -> tuple[ed25519.Ed25519PublicKey, memoryview]: + """Make Ed25519 public key from data.""" + (point,), data = self.get_public(data) + public_key = ed25519.Ed25519PublicKey.from_public_bytes( + point.tobytes() + ) + return public_key, data + + def load_private( + self, data: memoryview, pubfields + ) -> tuple[ed25519.Ed25519PrivateKey, memoryview]: + """Make Ed25519 private key from data.""" + (point,), data = self.get_public(data) + keypair, data = _get_sshstr(data) + + secret = keypair[:32] + point2 = keypair[32:] + if point != point2 or (point,) != pubfields: + raise ValueError("Corrupt data: ed25519 field mismatch") + private_key = ed25519.Ed25519PrivateKey.from_private_bytes(secret) + return private_key, data + + def encode_public( + self, public_key: ed25519.Ed25519PublicKey, f_pub: _FragList + ) -> None: + """Write Ed25519 public key""" + raw_public_key = public_key.public_bytes( + Encoding.Raw, PublicFormat.Raw + ) + f_pub.put_sshstr(raw_public_key) + + def encode_private( + self, private_key: ed25519.Ed25519PrivateKey, f_priv: _FragList + ) -> None: + """Write Ed25519 private key""" + public_key = private_key.public_key() + raw_private_key = private_key.private_bytes( + Encoding.Raw, PrivateFormat.Raw, NoEncryption() + ) + raw_public_key = public_key.public_bytes( + Encoding.Raw, PublicFormat.Raw + ) + f_keypair = _FragList([raw_private_key, raw_public_key]) + + self.encode_public(public_key, f_priv) + f_priv.put_sshstr(f_keypair) + + +def load_application(data) -> tuple[memoryview, memoryview]: + """ + U2F application strings + """ + application, data = _get_sshstr(data) + if not application.tobytes().startswith(b"ssh:"): + raise ValueError( + "U2F application string does not start with b'ssh:' " + f"({application})" + ) + return application, data + + +class _SSHFormatSKEd25519: + """ + The format of a sk-ssh-ed25519@openssh.com public key is: + + string "sk-ssh-ed25519@openssh.com" + string public key + string application (user-specified, but typically "ssh:") + """ + + def load_public( + self, data: memoryview + ) -> tuple[ed25519.Ed25519PublicKey, memoryview]: + """Make Ed25519 public key from data.""" + public_key, data = _lookup_kformat(_SSH_ED25519).load_public(data) + _, data = load_application(data) + return public_key, data + + +class _SSHFormatSKECDSA: + """ + The format of a sk-ecdsa-sha2-nistp256@openssh.com public key is: + + string "sk-ecdsa-sha2-nistp256@openssh.com" + string curve name + ec_point Q + string application (user-specified, but typically "ssh:") + """ + + def load_public( + self, data: memoryview + ) -> tuple[ec.EllipticCurvePublicKey, memoryview]: + """Make ECDSA public key from data.""" + public_key, data = _lookup_kformat(_ECDSA_NISTP256).load_public(data) + _, data = load_application(data) + return public_key, data + + +_KEY_FORMATS = { + _SSH_RSA: _SSHFormatRSA(), + _SSH_DSA: _SSHFormatDSA(), + _SSH_ED25519: _SSHFormatEd25519(), + _ECDSA_NISTP256: _SSHFormatECDSA(b"nistp256", ec.SECP256R1()), + _ECDSA_NISTP384: _SSHFormatECDSA(b"nistp384", ec.SECP384R1()), + _ECDSA_NISTP521: _SSHFormatECDSA(b"nistp521", ec.SECP521R1()), + _SK_SSH_ED25519: _SSHFormatSKEd25519(), + _SK_SSH_ECDSA_NISTP256: _SSHFormatSKECDSA(), +} + + +def _lookup_kformat(key_type: 
bytes): + """Return valid format or throw error""" + if not isinstance(key_type, bytes): + key_type = memoryview(key_type).tobytes() + if key_type in _KEY_FORMATS: + return _KEY_FORMATS[key_type] + raise UnsupportedAlgorithm(f"Unsupported key type: {key_type!r}") + + +SSHPrivateKeyTypes = typing.Union[ + ec.EllipticCurvePrivateKey, + rsa.RSAPrivateKey, + dsa.DSAPrivateKey, + ed25519.Ed25519PrivateKey, +] + + +def load_ssh_private_key( + data: bytes, + password: bytes | None, + backend: typing.Any = None, +) -> SSHPrivateKeyTypes: + """Load private key from OpenSSH custom encoding.""" + utils._check_byteslike("data", data) + if password is not None: + utils._check_bytes("password", password) + + m = _PEM_RC.search(data) + if not m: + raise ValueError("Not OpenSSH private key format") + p1 = m.start(1) + p2 = m.end(1) + data = binascii.a2b_base64(memoryview(data)[p1:p2]) + if not data.startswith(_SK_MAGIC): + raise ValueError("Not OpenSSH private key format") + data = memoryview(data)[len(_SK_MAGIC) :] + + # parse header + ciphername, data = _get_sshstr(data) + kdfname, data = _get_sshstr(data) + kdfoptions, data = _get_sshstr(data) + nkeys, data = _get_u32(data) + if nkeys != 1: + raise ValueError("Only one key supported") + + # load public key data + pubdata, data = _get_sshstr(data) + pub_key_type, pubdata = _get_sshstr(pubdata) + kformat = _lookup_kformat(pub_key_type) + pubfields, pubdata = kformat.get_public(pubdata) + _check_empty(pubdata) + + if (ciphername, kdfname) != (_NONE, _NONE): + ciphername_bytes = ciphername.tobytes() + if ciphername_bytes not in _SSH_CIPHERS: + raise UnsupportedAlgorithm( + f"Unsupported cipher: {ciphername_bytes!r}" + ) + if kdfname != _BCRYPT: + raise UnsupportedAlgorithm(f"Unsupported KDF: {kdfname!r}") + blklen = _SSH_CIPHERS[ciphername_bytes].block_len + tag_len = _SSH_CIPHERS[ciphername_bytes].tag_len + # load secret data + edata, data = _get_sshstr(data) + # see https://bugzilla.mindrot.org/show_bug.cgi?id=3553 for + # information about how OpenSSH handles AEAD tags + if _SSH_CIPHERS[ciphername_bytes].is_aead: + tag = bytes(data) + if len(tag) != tag_len: + raise ValueError("Corrupt data: invalid tag length for cipher") + else: + _check_empty(data) + _check_block_size(edata, blklen) + salt, kbuf = _get_sshstr(kdfoptions) + rounds, kbuf = _get_u32(kbuf) + _check_empty(kbuf) + ciph = _init_cipher(ciphername_bytes, password, salt.tobytes(), rounds) + dec = ciph.decryptor() + edata = memoryview(dec.update(edata)) + if _SSH_CIPHERS[ciphername_bytes].is_aead: + assert isinstance(dec, AEADDecryptionContext) + _check_empty(dec.finalize_with_tag(tag)) + else: + # _check_block_size requires data to be a full block so there + # should be no output from finalize + _check_empty(dec.finalize()) + else: + # load secret data + edata, data = _get_sshstr(data) + _check_empty(data) + blklen = 8 + _check_block_size(edata, blklen) + ck1, edata = _get_u32(edata) + ck2, edata = _get_u32(edata) + if ck1 != ck2: + raise ValueError("Corrupt data: broken checksum") + + # load per-key struct + key_type, edata = _get_sshstr(edata) + if key_type != pub_key_type: + raise ValueError("Corrupt data: key type mismatch") + private_key, edata = kformat.load_private(edata, pubfields) + # We don't use the comment + _, edata = _get_sshstr(edata) + + # yes, SSH does padding check *after* all other parsing is done. + # need to follow as it writes zero-byte padding too. 
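+    # OpenSSH pads the private section with the fixed byte sequence
+    # 1, 2, 3, ... up to the cipher block size, so whatever remains after
+    # parsing must equal a prefix of _PADDING (checked below).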
+ if edata != _PADDING[: len(edata)]: + raise ValueError("Corrupt data: invalid padding") + + if isinstance(private_key, dsa.DSAPrivateKey): + warnings.warn( + "SSH DSA keys are deprecated and will be removed in a future " + "release.", + utils.DeprecatedIn40, + stacklevel=2, + ) + + return private_key + + +def _serialize_ssh_private_key( + private_key: SSHPrivateKeyTypes, + password: bytes, + encryption_algorithm: KeySerializationEncryption, +) -> bytes: + """Serialize private key with OpenSSH custom encoding.""" + utils._check_bytes("password", password) + if isinstance(private_key, dsa.DSAPrivateKey): + warnings.warn( + "SSH DSA key support is deprecated and will be " + "removed in a future release", + utils.DeprecatedIn40, + stacklevel=4, + ) + + key_type = _get_ssh_key_type(private_key) + kformat = _lookup_kformat(key_type) + + # setup parameters + f_kdfoptions = _FragList() + if password: + ciphername = _DEFAULT_CIPHER + blklen = _SSH_CIPHERS[ciphername].block_len + kdfname = _BCRYPT + rounds = _DEFAULT_ROUNDS + if ( + isinstance(encryption_algorithm, _KeySerializationEncryption) + and encryption_algorithm._kdf_rounds is not None + ): + rounds = encryption_algorithm._kdf_rounds + salt = os.urandom(16) + f_kdfoptions.put_sshstr(salt) + f_kdfoptions.put_u32(rounds) + ciph = _init_cipher(ciphername, password, salt, rounds) + else: + ciphername = kdfname = _NONE + blklen = 8 + ciph = None + nkeys = 1 + checkval = os.urandom(4) + comment = b"" + + # encode public and private parts together + f_public_key = _FragList() + f_public_key.put_sshstr(key_type) + kformat.encode_public(private_key.public_key(), f_public_key) + + f_secrets = _FragList([checkval, checkval]) + f_secrets.put_sshstr(key_type) + kformat.encode_private(private_key, f_secrets) + f_secrets.put_sshstr(comment) + f_secrets.put_raw(_PADDING[: blklen - (f_secrets.size() % blklen)]) + + # top-level structure + f_main = _FragList() + f_main.put_raw(_SK_MAGIC) + f_main.put_sshstr(ciphername) + f_main.put_sshstr(kdfname) + f_main.put_sshstr(f_kdfoptions) + f_main.put_u32(nkeys) + f_main.put_sshstr(f_public_key) + f_main.put_sshstr(f_secrets) + + # copy result info bytearray + slen = f_secrets.size() + mlen = f_main.size() + buf = memoryview(bytearray(mlen + blklen)) + f_main.render(buf) + ofs = mlen - slen + + # encrypt in-place + if ciph is not None: + ciph.encryptor().update_into(buf[ofs:mlen], buf[ofs:]) + + return _ssh_pem_encode(buf[:mlen]) + + +SSHPublicKeyTypes = typing.Union[ + ec.EllipticCurvePublicKey, + rsa.RSAPublicKey, + dsa.DSAPublicKey, + ed25519.Ed25519PublicKey, +] + +SSHCertPublicKeyTypes = typing.Union[ + ec.EllipticCurvePublicKey, + rsa.RSAPublicKey, + ed25519.Ed25519PublicKey, +] + + +class SSHCertificateType(enum.Enum): + USER = 1 + HOST = 2 + + +class SSHCertificate: + def __init__( + self, + _nonce: memoryview, + _public_key: SSHPublicKeyTypes, + _serial: int, + _cctype: int, + _key_id: memoryview, + _valid_principals: list[bytes], + _valid_after: int, + _valid_before: int, + _critical_options: dict[bytes, bytes], + _extensions: dict[bytes, bytes], + _sig_type: memoryview, + _sig_key: memoryview, + _inner_sig_type: memoryview, + _signature: memoryview, + _tbs_cert_body: memoryview, + _cert_key_type: bytes, + _cert_body: memoryview, + ): + self._nonce = _nonce + self._public_key = _public_key + self._serial = _serial + try: + self._type = SSHCertificateType(_cctype) + except ValueError: + raise ValueError("Invalid certificate type") + self._key_id = _key_id + self._valid_principals = _valid_principals + 
self._valid_after = _valid_after + self._valid_before = _valid_before + self._critical_options = _critical_options + self._extensions = _extensions + self._sig_type = _sig_type + self._sig_key = _sig_key + self._inner_sig_type = _inner_sig_type + self._signature = _signature + self._cert_key_type = _cert_key_type + self._cert_body = _cert_body + self._tbs_cert_body = _tbs_cert_body + + @property + def nonce(self) -> bytes: + return bytes(self._nonce) + + def public_key(self) -> SSHCertPublicKeyTypes: + # make mypy happy until we remove DSA support entirely and + # the underlying union won't have a disallowed type + return typing.cast(SSHCertPublicKeyTypes, self._public_key) + + @property + def serial(self) -> int: + return self._serial + + @property + def type(self) -> SSHCertificateType: + return self._type + + @property + def key_id(self) -> bytes: + return bytes(self._key_id) + + @property + def valid_principals(self) -> list[bytes]: + return self._valid_principals + + @property + def valid_before(self) -> int: + return self._valid_before + + @property + def valid_after(self) -> int: + return self._valid_after + + @property + def critical_options(self) -> dict[bytes, bytes]: + return self._critical_options + + @property + def extensions(self) -> dict[bytes, bytes]: + return self._extensions + + def signature_key(self) -> SSHCertPublicKeyTypes: + sigformat = _lookup_kformat(self._sig_type) + signature_key, sigkey_rest = sigformat.load_public(self._sig_key) + _check_empty(sigkey_rest) + return signature_key + + def public_bytes(self) -> bytes: + return ( + bytes(self._cert_key_type) + + b" " + + binascii.b2a_base64(bytes(self._cert_body), newline=False) + ) + + def verify_cert_signature(self) -> None: + signature_key = self.signature_key() + if isinstance(signature_key, ed25519.Ed25519PublicKey): + signature_key.verify( + bytes(self._signature), bytes(self._tbs_cert_body) + ) + elif isinstance(signature_key, ec.EllipticCurvePublicKey): + # The signature is encoded as a pair of big-endian integers + r, data = _get_mpint(self._signature) + s, data = _get_mpint(data) + _check_empty(data) + computed_sig = asym_utils.encode_dss_signature(r, s) + hash_alg = _get_ec_hash_alg(signature_key.curve) + signature_key.verify( + computed_sig, bytes(self._tbs_cert_body), ec.ECDSA(hash_alg) + ) + else: + assert isinstance(signature_key, rsa.RSAPublicKey) + if self._inner_sig_type == _SSH_RSA: + hash_alg = hashes.SHA1() + elif self._inner_sig_type == _SSH_RSA_SHA256: + hash_alg = hashes.SHA256() + else: + assert self._inner_sig_type == _SSH_RSA_SHA512 + hash_alg = hashes.SHA512() + signature_key.verify( + bytes(self._signature), + bytes(self._tbs_cert_body), + padding.PKCS1v15(), + hash_alg, + ) + + +def _get_ec_hash_alg(curve: ec.EllipticCurve) -> hashes.HashAlgorithm: + if isinstance(curve, ec.SECP256R1): + return hashes.SHA256() + elif isinstance(curve, ec.SECP384R1): + return hashes.SHA384() + else: + assert isinstance(curve, ec.SECP521R1) + return hashes.SHA512() + + +def _load_ssh_public_identity( + data: bytes, + _legacy_dsa_allowed=False, +) -> SSHCertificate | SSHPublicKeyTypes: + utils._check_byteslike("data", data) + + m = _SSH_PUBKEY_RC.match(data) + if not m: + raise ValueError("Invalid line format") + key_type = orig_key_type = m.group(1) + key_body = m.group(2) + with_cert = False + if key_type.endswith(_CERT_SUFFIX): + with_cert = True + key_type = key_type[: -len(_CERT_SUFFIX)] + if key_type == _SSH_DSA and not _legacy_dsa_allowed: + raise UnsupportedAlgorithm( + "DSA keys aren't supported 
in SSH certificates" + ) + kformat = _lookup_kformat(key_type) + + try: + rest = memoryview(binascii.a2b_base64(key_body)) + except (TypeError, binascii.Error): + raise ValueError("Invalid format") + + if with_cert: + cert_body = rest + inner_key_type, rest = _get_sshstr(rest) + if inner_key_type != orig_key_type: + raise ValueError("Invalid key format") + if with_cert: + nonce, rest = _get_sshstr(rest) + public_key, rest = kformat.load_public(rest) + if with_cert: + serial, rest = _get_u64(rest) + cctype, rest = _get_u32(rest) + key_id, rest = _get_sshstr(rest) + principals, rest = _get_sshstr(rest) + valid_principals = [] + while principals: + principal, principals = _get_sshstr(principals) + valid_principals.append(bytes(principal)) + valid_after, rest = _get_u64(rest) + valid_before, rest = _get_u64(rest) + crit_options, rest = _get_sshstr(rest) + critical_options = _parse_exts_opts(crit_options) + exts, rest = _get_sshstr(rest) + extensions = _parse_exts_opts(exts) + # Get the reserved field, which is unused. + _, rest = _get_sshstr(rest) + sig_key_raw, rest = _get_sshstr(rest) + sig_type, sig_key = _get_sshstr(sig_key_raw) + if sig_type == _SSH_DSA and not _legacy_dsa_allowed: + raise UnsupportedAlgorithm( + "DSA signatures aren't supported in SSH certificates" + ) + # Get the entire cert body and subtract the signature + tbs_cert_body = cert_body[: -len(rest)] + signature_raw, rest = _get_sshstr(rest) + _check_empty(rest) + inner_sig_type, sig_rest = _get_sshstr(signature_raw) + # RSA certs can have multiple algorithm types + if ( + sig_type == _SSH_RSA + and inner_sig_type + not in [_SSH_RSA_SHA256, _SSH_RSA_SHA512, _SSH_RSA] + ) or (sig_type != _SSH_RSA and inner_sig_type != sig_type): + raise ValueError("Signature key type does not match") + signature, sig_rest = _get_sshstr(sig_rest) + _check_empty(sig_rest) + return SSHCertificate( + nonce, + public_key, + serial, + cctype, + key_id, + valid_principals, + valid_after, + valid_before, + critical_options, + extensions, + sig_type, + sig_key, + inner_sig_type, + signature, + tbs_cert_body, + orig_key_type, + cert_body, + ) + else: + _check_empty(rest) + return public_key + + +def load_ssh_public_identity( + data: bytes, +) -> SSHCertificate | SSHPublicKeyTypes: + return _load_ssh_public_identity(data) + + +def _parse_exts_opts(exts_opts: memoryview) -> dict[bytes, bytes]: + result: dict[bytes, bytes] = {} + last_name = None + while exts_opts: + name, exts_opts = _get_sshstr(exts_opts) + bname: bytes = bytes(name) + if bname in result: + raise ValueError("Duplicate name") + if last_name is not None and bname < last_name: + raise ValueError("Fields not lexically sorted") + value, exts_opts = _get_sshstr(exts_opts) + if len(value) > 0: + value, extra = _get_sshstr(value) + if len(extra) > 0: + raise ValueError("Unexpected extra data after value") + result[bname] = bytes(value) + last_name = bname + return result + + +def load_ssh_public_key( + data: bytes, backend: typing.Any = None +) -> SSHPublicKeyTypes: + cert_or_key = _load_ssh_public_identity(data, _legacy_dsa_allowed=True) + public_key: SSHPublicKeyTypes + if isinstance(cert_or_key, SSHCertificate): + public_key = cert_or_key.public_key() + else: + public_key = cert_or_key + + if isinstance(public_key, dsa.DSAPublicKey): + warnings.warn( + "SSH DSA keys are deprecated and will be removed in a future " + "release.", + utils.DeprecatedIn40, + stacklevel=2, + ) + return public_key + + +def serialize_ssh_public_key(public_key: SSHPublicKeyTypes) -> bytes: + """One-line public key 
format for OpenSSH""" + if isinstance(public_key, dsa.DSAPublicKey): + warnings.warn( + "SSH DSA key support is deprecated and will be " + "removed in a future release", + utils.DeprecatedIn40, + stacklevel=4, + ) + key_type = _get_ssh_key_type(public_key) + kformat = _lookup_kformat(key_type) + + f_pub = _FragList() + f_pub.put_sshstr(key_type) + kformat.encode_public(public_key, f_pub) + + pub = binascii.b2a_base64(f_pub.tobytes()).strip() + return b"".join([key_type, b" ", pub]) + + +SSHCertPrivateKeyTypes = typing.Union[ + ec.EllipticCurvePrivateKey, + rsa.RSAPrivateKey, + ed25519.Ed25519PrivateKey, +] + + +# This is an undocumented limit enforced in the openssh codebase for sshd and +# ssh-keygen, but it is undefined in the ssh certificates spec. +_SSHKEY_CERT_MAX_PRINCIPALS = 256 + + +class SSHCertificateBuilder: + def __init__( + self, + _public_key: SSHCertPublicKeyTypes | None = None, + _serial: int | None = None, + _type: SSHCertificateType | None = None, + _key_id: bytes | None = None, + _valid_principals: list[bytes] = [], + _valid_for_all_principals: bool = False, + _valid_before: int | None = None, + _valid_after: int | None = None, + _critical_options: list[tuple[bytes, bytes]] = [], + _extensions: list[tuple[bytes, bytes]] = [], + ): + self._public_key = _public_key + self._serial = _serial + self._type = _type + self._key_id = _key_id + self._valid_principals = _valid_principals + self._valid_for_all_principals = _valid_for_all_principals + self._valid_before = _valid_before + self._valid_after = _valid_after + self._critical_options = _critical_options + self._extensions = _extensions + + def public_key( + self, public_key: SSHCertPublicKeyTypes + ) -> SSHCertificateBuilder: + if not isinstance( + public_key, + ( + ec.EllipticCurvePublicKey, + rsa.RSAPublicKey, + ed25519.Ed25519PublicKey, + ), + ): + raise TypeError("Unsupported key type") + if self._public_key is not None: + raise ValueError("public_key already set") + + return SSHCertificateBuilder( + _public_key=public_key, + _serial=self._serial, + _type=self._type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + _valid_after=self._valid_after, + _critical_options=self._critical_options, + _extensions=self._extensions, + ) + + def serial(self, serial: int) -> SSHCertificateBuilder: + if not isinstance(serial, int): + raise TypeError("serial must be an integer") + if not 0 <= serial < 2**64: + raise ValueError("serial must be between 0 and 2**64") + if self._serial is not None: + raise ValueError("serial already set") + + return SSHCertificateBuilder( + _public_key=self._public_key, + _serial=serial, + _type=self._type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + _valid_after=self._valid_after, + _critical_options=self._critical_options, + _extensions=self._extensions, + ) + + def type(self, type: SSHCertificateType) -> SSHCertificateBuilder: + if not isinstance(type, SSHCertificateType): + raise TypeError("type must be an SSHCertificateType") + if self._type is not None: + raise ValueError("type already set") + + return SSHCertificateBuilder( + _public_key=self._public_key, + _serial=self._serial, + _type=type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + 
_valid_after=self._valid_after,
+            _critical_options=self._critical_options,
+            _extensions=self._extensions,
+        )
+
+    def key_id(self, key_id: bytes) -> SSHCertificateBuilder:
+        if not isinstance(key_id, bytes):
+            raise TypeError("key_id must be bytes")
+        if self._key_id is not None:
+            raise ValueError("key_id already set")
+
+        return SSHCertificateBuilder(
+            _public_key=self._public_key,
+            _serial=self._serial,
+            _type=self._type,
+            _key_id=key_id,
+            _valid_principals=self._valid_principals,
+            _valid_for_all_principals=self._valid_for_all_principals,
+            _valid_before=self._valid_before,
+            _valid_after=self._valid_after,
+            _critical_options=self._critical_options,
+            _extensions=self._extensions,
+        )
+
+    def valid_principals(
+        self, valid_principals: list[bytes]
+    ) -> SSHCertificateBuilder:
+        if self._valid_for_all_principals:
+            raise ValueError(
+                "Principals can't be set because the cert is valid "
+                "for all principals"
+            )
+        if (
+            not all(isinstance(x, bytes) for x in valid_principals)
+            or not valid_principals
+        ):
+            raise TypeError(
+                "principals must be a list of bytes and can't be empty"
+            )
+        if self._valid_principals:
+            raise ValueError("valid_principals already set")
+
+        if len(valid_principals) > _SSHKEY_CERT_MAX_PRINCIPALS:
+            raise ValueError(
+                "Reached or exceeded the maximum number of valid_principals"
+            )
+
+        return SSHCertificateBuilder(
+            _public_key=self._public_key,
+            _serial=self._serial,
+            _type=self._type,
+            _key_id=self._key_id,
+            _valid_principals=valid_principals,
+            _valid_for_all_principals=self._valid_for_all_principals,
+            _valid_before=self._valid_before,
+            _valid_after=self._valid_after,
+            _critical_options=self._critical_options,
+            _extensions=self._extensions,
+        )
+
+    def valid_for_all_principals(self):
+        if self._valid_principals:
+            raise ValueError(
+                "valid_principals already set, can't set "
+                "valid_for_all_principals"
+            )
+        if self._valid_for_all_principals:
+            raise ValueError("valid_for_all_principals already set")
+
+        return SSHCertificateBuilder(
+            _public_key=self._public_key,
+            _serial=self._serial,
+            _type=self._type,
+            _key_id=self._key_id,
+            _valid_principals=self._valid_principals,
+            _valid_for_all_principals=True,
+            _valid_before=self._valid_before,
+            _valid_after=self._valid_after,
+            _critical_options=self._critical_options,
+            _extensions=self._extensions,
+        )
+
+    def valid_before(self, valid_before: int | float) -> SSHCertificateBuilder:
+        if not isinstance(valid_before, (int, float)):
+            raise TypeError("valid_before must be an int or float")
+        valid_before = int(valid_before)
+        if valid_before < 0 or valid_before >= 2**64:
+            raise ValueError("valid_before must be in [0, 2**64)")
+        if self._valid_before is not None:
+            raise ValueError("valid_before already set")
+
+        return SSHCertificateBuilder(
+            _public_key=self._public_key,
+            _serial=self._serial,
+            _type=self._type,
+            _key_id=self._key_id,
+            _valid_principals=self._valid_principals,
+            _valid_for_all_principals=self._valid_for_all_principals,
+            _valid_before=valid_before,
+            _valid_after=self._valid_after,
+            _critical_options=self._critical_options,
+            _extensions=self._extensions,
+        )
+
+    def valid_after(self, valid_after: int | float) -> SSHCertificateBuilder:
+        if not isinstance(valid_after, (int, float)):
+            raise TypeError("valid_after must be an int or float")
+        valid_after = int(valid_after)
+        if valid_after < 0 or valid_after >= 2**64:
+            raise ValueError("valid_after must be in [0, 2**64)")
+        if self._valid_after is not None:
+            raise ValueError("valid_after already set")
+
+        return 
SSHCertificateBuilder( + _public_key=self._public_key, + _serial=self._serial, + _type=self._type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + _valid_after=valid_after, + _critical_options=self._critical_options, + _extensions=self._extensions, + ) + + def add_critical_option( + self, name: bytes, value: bytes + ) -> SSHCertificateBuilder: + if not isinstance(name, bytes) or not isinstance(value, bytes): + raise TypeError("name and value must be bytes") + # This is O(n**2) + if name in [name for name, _ in self._critical_options]: + raise ValueError("Duplicate critical option name") + + return SSHCertificateBuilder( + _public_key=self._public_key, + _serial=self._serial, + _type=self._type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + _valid_after=self._valid_after, + _critical_options=[*self._critical_options, (name, value)], + _extensions=self._extensions, + ) + + def add_extension( + self, name: bytes, value: bytes + ) -> SSHCertificateBuilder: + if not isinstance(name, bytes) or not isinstance(value, bytes): + raise TypeError("name and value must be bytes") + # This is O(n**2) + if name in [name for name, _ in self._extensions]: + raise ValueError("Duplicate extension name") + + return SSHCertificateBuilder( + _public_key=self._public_key, + _serial=self._serial, + _type=self._type, + _key_id=self._key_id, + _valid_principals=self._valid_principals, + _valid_for_all_principals=self._valid_for_all_principals, + _valid_before=self._valid_before, + _valid_after=self._valid_after, + _critical_options=self._critical_options, + _extensions=[*self._extensions, (name, value)], + ) + + def sign(self, private_key: SSHCertPrivateKeyTypes) -> SSHCertificate: + if not isinstance( + private_key, + ( + ec.EllipticCurvePrivateKey, + rsa.RSAPrivateKey, + ed25519.Ed25519PrivateKey, + ), + ): + raise TypeError("Unsupported private key type") + + if self._public_key is None: + raise ValueError("public_key must be set") + + # Not required + serial = 0 if self._serial is None else self._serial + + if self._type is None: + raise ValueError("type must be set") + + # Not required + key_id = b"" if self._key_id is None else self._key_id + + # A zero length list is valid, but means the certificate + # is valid for any principal of the specified type. We require + # the user to explicitly set valid_for_all_principals to get + # that behavior. 
+ if not self._valid_principals and not self._valid_for_all_principals: + raise ValueError( + "valid_principals must be set if valid_for_all_principals " + "is False" + ) + + if self._valid_before is None: + raise ValueError("valid_before must be set") + + if self._valid_after is None: + raise ValueError("valid_after must be set") + + if self._valid_after > self._valid_before: + raise ValueError("valid_after must be earlier than valid_before") + + # lexically sort our byte strings + self._critical_options.sort(key=lambda x: x[0]) + self._extensions.sort(key=lambda x: x[0]) + + key_type = _get_ssh_key_type(self._public_key) + cert_prefix = key_type + _CERT_SUFFIX + + # Marshal the bytes to be signed + nonce = os.urandom(32) + kformat = _lookup_kformat(key_type) + f = _FragList() + f.put_sshstr(cert_prefix) + f.put_sshstr(nonce) + kformat.encode_public(self._public_key, f) + f.put_u64(serial) + f.put_u32(self._type.value) + f.put_sshstr(key_id) + fprincipals = _FragList() + for p in self._valid_principals: + fprincipals.put_sshstr(p) + f.put_sshstr(fprincipals.tobytes()) + f.put_u64(self._valid_after) + f.put_u64(self._valid_before) + fcrit = _FragList() + for name, value in self._critical_options: + fcrit.put_sshstr(name) + if len(value) > 0: + foptval = _FragList() + foptval.put_sshstr(value) + fcrit.put_sshstr(foptval.tobytes()) + else: + fcrit.put_sshstr(value) + f.put_sshstr(fcrit.tobytes()) + fext = _FragList() + for name, value in self._extensions: + fext.put_sshstr(name) + if len(value) > 0: + fextval = _FragList() + fextval.put_sshstr(value) + fext.put_sshstr(fextval.tobytes()) + else: + fext.put_sshstr(value) + f.put_sshstr(fext.tobytes()) + f.put_sshstr(b"") # RESERVED FIELD + # encode CA public key + ca_type = _get_ssh_key_type(private_key) + caformat = _lookup_kformat(ca_type) + caf = _FragList() + caf.put_sshstr(ca_type) + caformat.encode_public(private_key.public_key(), caf) + f.put_sshstr(caf.tobytes()) + # Sigs according to the rules defined for the CA's public key + # (RFC4253 section 6.6 for ssh-rsa, RFC5656 for ECDSA, + # and RFC8032 for Ed25519). + if isinstance(private_key, ed25519.Ed25519PrivateKey): + signature = private_key.sign(f.tobytes()) + fsig = _FragList() + fsig.put_sshstr(ca_type) + fsig.put_sshstr(signature) + f.put_sshstr(fsig.tobytes()) + elif isinstance(private_key, ec.EllipticCurvePrivateKey): + hash_alg = _get_ec_hash_alg(private_key.curve) + signature = private_key.sign(f.tobytes(), ec.ECDSA(hash_alg)) + r, s = asym_utils.decode_dss_signature(signature) + fsig = _FragList() + fsig.put_sshstr(ca_type) + fsigblob = _FragList() + fsigblob.put_mpint(r) + fsigblob.put_mpint(s) + fsig.put_sshstr(fsigblob.tobytes()) + f.put_sshstr(fsig.tobytes()) + + else: + assert isinstance(private_key, rsa.RSAPrivateKey) + # Just like Golang, we're going to use SHA512 for RSA + # https://cs.opensource.google/go/x/crypto/+/refs/tags/ + # v0.4.0:ssh/certs.go;l=445 + # RFC 8332 defines SHA256 and 512 as options + fsig = _FragList() + fsig.put_sshstr(_SSH_RSA_SHA512) + signature = private_key.sign( + f.tobytes(), padding.PKCS1v15(), hashes.SHA512() + ) + fsig.put_sshstr(signature) + f.put_sshstr(fsig.tobytes()) + + cert_data = binascii.b2a_base64(f.tobytes()).strip() + # load_ssh_public_identity returns a union, but this is + # guaranteed to be an SSHCertificate, so we cast to make + # mypy happy. 
+ return typing.cast( + SSHCertificate, + load_ssh_public_identity(b"".join([cert_prefix, b" ", cert_data])), + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/__init__.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/__init__.py new file mode 100644 index 00000000..c1af4230 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/__init__.py @@ -0,0 +1,9 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + + +class InvalidToken(Exception): + pass diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/hotp.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/hotp.py new file mode 100644 index 00000000..af5ab6ef --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/hotp.py @@ -0,0 +1,92 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import base64 +import typing +from urllib.parse import quote, urlencode + +from cryptography.hazmat.primitives import constant_time, hmac +from cryptography.hazmat.primitives.hashes import SHA1, SHA256, SHA512 +from cryptography.hazmat.primitives.twofactor import InvalidToken + +HOTPHashTypes = typing.Union[SHA1, SHA256, SHA512] + + +def _generate_uri( + hotp: HOTP, + type_name: str, + account_name: str, + issuer: str | None, + extra_parameters: list[tuple[str, int]], +) -> str: + parameters = [ + ("digits", hotp._length), + ("secret", base64.b32encode(hotp._key)), + ("algorithm", hotp._algorithm.name.upper()), + ] + + if issuer is not None: + parameters.append(("issuer", issuer)) + + parameters.extend(extra_parameters) + + label = ( + f"{quote(issuer)}:{quote(account_name)}" + if issuer + else quote(account_name) + ) + return f"otpauth://{type_name}/{label}?{urlencode(parameters)}" + + +class HOTP: + def __init__( + self, + key: bytes, + length: int, + algorithm: HOTPHashTypes, + backend: typing.Any = None, + enforce_key_length: bool = True, + ) -> None: + if len(key) < 16 and enforce_key_length is True: + raise ValueError("Key length has to be at least 128 bits.") + + if not isinstance(length, int): + raise TypeError("Length parameter must be an integer type.") + + if length < 6 or length > 8: + raise ValueError("Length of HOTP has to be between 6 and 8.") + + if not isinstance(algorithm, (SHA1, SHA256, SHA512)): + raise TypeError("Algorithm must be SHA1, SHA256 or SHA512.") + + self._key = key + self._length = length + self._algorithm = algorithm + + def generate(self, counter: int) -> bytes: + truncated_value = self._dynamic_truncate(counter) + hotp = truncated_value % (10**self._length) + return "{0:0{1}}".format(hotp, self._length).encode() + + def verify(self, hotp: bytes, counter: int) -> None: + if not constant_time.bytes_eq(self.generate(counter), hotp): + raise InvalidToken("Supplied HOTP value does not match.") + + def _dynamic_truncate(self, counter: int) -> int: + ctx = hmac.HMAC(self._key, self._algorithm) + ctx.update(counter.to_bytes(length=8, byteorder="big")) + hmac_value = ctx.finalize() + + offset = hmac_value[len(hmac_value) - 1] & 
0b1111 + p = hmac_value[offset : offset + 4] + return int.from_bytes(p, byteorder="big") & 0x7FFFFFFF + + def get_provisioning_uri( + self, account_name: str, counter: int, issuer: str | None + ) -> str: + return _generate_uri( + self, "hotp", account_name, issuer, [("counter", int(counter))] + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/totp.py b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/totp.py new file mode 100644 index 00000000..68a50774 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/hazmat/primitives/twofactor/totp.py @@ -0,0 +1,50 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import typing + +from cryptography.hazmat.primitives import constant_time +from cryptography.hazmat.primitives.twofactor import InvalidToken +from cryptography.hazmat.primitives.twofactor.hotp import ( + HOTP, + HOTPHashTypes, + _generate_uri, +) + + +class TOTP: + def __init__( + self, + key: bytes, + length: int, + algorithm: HOTPHashTypes, + time_step: int, + backend: typing.Any = None, + enforce_key_length: bool = True, + ): + self._time_step = time_step + self._hotp = HOTP( + key, length, algorithm, enforce_key_length=enforce_key_length + ) + + def generate(self, time: int | float) -> bytes: + counter = int(time / self._time_step) + return self._hotp.generate(counter) + + def verify(self, totp: bytes, time: int) -> None: + if not constant_time.bytes_eq(self.generate(time), totp): + raise InvalidToken("Supplied TOTP value does not match.") + + def get_provisioning_uri( + self, account_name: str, issuer: str | None + ) -> str: + return _generate_uri( + self._hotp, + "totp", + account_name, + issuer, + [("period", int(self._time_step))], + ) diff --git a/templates/skills/file_manager/dependencies/cryptography/py.typed b/templates/skills/file_manager/dependencies/cryptography/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/cryptography/utils.py b/templates/skills/file_manager/dependencies/cryptography/utils.py new file mode 100644 index 00000000..706d0ae4 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/utils.py @@ -0,0 +1,127 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import enum +import sys +import types +import typing +import warnings + + +# We use a UserWarning subclass, instead of DeprecationWarning, because CPython +# decided deprecation warnings should be invisible by default. +class CryptographyDeprecationWarning(UserWarning): + pass + + +# Several APIs were deprecated with no specific end-of-life date because of the +# ubiquity of their use. They should not be removed until we agree on when that +# cycle ends. 
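+# For example, the SSH code above passes utils.DeprecatedIn40 as the warning
+# class when emitting its DSA deprecation warnings.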
+DeprecatedIn36 = CryptographyDeprecationWarning +DeprecatedIn37 = CryptographyDeprecationWarning +DeprecatedIn40 = CryptographyDeprecationWarning +DeprecatedIn41 = CryptographyDeprecationWarning +DeprecatedIn42 = CryptographyDeprecationWarning +DeprecatedIn43 = CryptographyDeprecationWarning + + +def _check_bytes(name: str, value: bytes) -> None: + if not isinstance(value, bytes): + raise TypeError(f"{name} must be bytes") + + +def _check_byteslike(name: str, value: bytes) -> None: + try: + memoryview(value) + except TypeError: + raise TypeError(f"{name} must be bytes-like") + + +def int_to_bytes(integer: int, length: int | None = None) -> bytes: + if length == 0: + raise ValueError("length argument can't be 0") + return integer.to_bytes( + length or (integer.bit_length() + 7) // 8 or 1, "big" + ) + + +class InterfaceNotImplemented(Exception): + pass + + +class _DeprecatedValue: + def __init__(self, value: object, message: str, warning_class): + self.value = value + self.message = message + self.warning_class = warning_class + + +class _ModuleWithDeprecations(types.ModuleType): + def __init__(self, module: types.ModuleType): + super().__init__(module.__name__) + self.__dict__["_module"] = module + + def __getattr__(self, attr: str) -> object: + obj = getattr(self._module, attr) + if isinstance(obj, _DeprecatedValue): + warnings.warn(obj.message, obj.warning_class, stacklevel=2) + obj = obj.value + return obj + + def __setattr__(self, attr: str, value: object) -> None: + setattr(self._module, attr, value) + + def __delattr__(self, attr: str) -> None: + obj = getattr(self._module, attr) + if isinstance(obj, _DeprecatedValue): + warnings.warn(obj.message, obj.warning_class, stacklevel=2) + + delattr(self._module, attr) + + def __dir__(self) -> typing.Sequence[str]: + return ["_module", *dir(self._module)] + + +def deprecated( + value: object, + module_name: str, + message: str, + warning_class: type[Warning], + name: str | None = None, +) -> _DeprecatedValue: + module = sys.modules[module_name] + if not isinstance(module, _ModuleWithDeprecations): + sys.modules[module_name] = module = _ModuleWithDeprecations(module) + dv = _DeprecatedValue(value, message, warning_class) + # Maintain backwards compatibility with `name is None` for pyOpenSSL. + if name is not None: + setattr(module, name, dv) + return dv + + +def cached_property(func: typing.Callable) -> property: + cached_name = f"_cached_{func}" + sentinel = object() + + def inner(instance: object): + cache = getattr(instance, cached_name, sentinel) + if cache is not sentinel: + return cache + result = func(instance) + setattr(instance, cached_name, result) + return result + + return property(inner) + + +# Python 3.10 changed representation of enums. We use well-defined object +# representation and string representation from Python 3.9. +class Enum(enum.Enum): + def __repr__(self) -> str: + return f"<{self.__class__.__name__}.{self._name_}: {self._value_!r}>" + + def __str__(self) -> str: + return f"{self.__class__.__name__}.{self._name_}" diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/__init__.py b/templates/skills/file_manager/dependencies/cryptography/x509/__init__.py new file mode 100644 index 00000000..26c6444c --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/__init__.py @@ -0,0 +1,259 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
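+
+# This package only re-exports the public X.509 API defined in the
+# submodules imported below. A minimal usage sketch (`pem_data` is an
+# assumed input, not something shipped with the package):
+#
+#     from cryptography import x509
+#
+#     cert = x509.load_pem_x509_certificate(pem_data)
+#     print(cert.subject, cert.serial_number)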
+ +from __future__ import annotations + +from cryptography.x509 import certificate_transparency, verification +from cryptography.x509.base import ( + Attribute, + AttributeNotFound, + Attributes, + Certificate, + CertificateBuilder, + CertificateRevocationList, + CertificateRevocationListBuilder, + CertificateSigningRequest, + CertificateSigningRequestBuilder, + InvalidVersion, + RevokedCertificate, + RevokedCertificateBuilder, + Version, + load_der_x509_certificate, + load_der_x509_crl, + load_der_x509_csr, + load_pem_x509_certificate, + load_pem_x509_certificates, + load_pem_x509_crl, + load_pem_x509_csr, + random_serial_number, +) +from cryptography.x509.extensions import ( + AccessDescription, + AuthorityInformationAccess, + AuthorityKeyIdentifier, + BasicConstraints, + CertificateIssuer, + CertificatePolicies, + CRLDistributionPoints, + CRLNumber, + CRLReason, + DeltaCRLIndicator, + DistributionPoint, + DuplicateExtension, + ExtendedKeyUsage, + Extension, + ExtensionNotFound, + Extensions, + ExtensionType, + FreshestCRL, + GeneralNames, + InhibitAnyPolicy, + InvalidityDate, + IssuerAlternativeName, + IssuingDistributionPoint, + KeyUsage, + MSCertificateTemplate, + NameConstraints, + NoticeReference, + OCSPAcceptableResponses, + OCSPNoCheck, + OCSPNonce, + PolicyConstraints, + PolicyInformation, + PrecertificateSignedCertificateTimestamps, + PrecertPoison, + ReasonFlags, + SignedCertificateTimestamps, + SubjectAlternativeName, + SubjectInformationAccess, + SubjectKeyIdentifier, + TLSFeature, + TLSFeatureType, + UnrecognizedExtension, + UserNotice, +) +from cryptography.x509.general_name import ( + DirectoryName, + DNSName, + GeneralName, + IPAddress, + OtherName, + RegisteredID, + RFC822Name, + UniformResourceIdentifier, + UnsupportedGeneralNameType, +) +from cryptography.x509.name import ( + Name, + NameAttribute, + RelativeDistinguishedName, +) +from cryptography.x509.oid import ( + AuthorityInformationAccessOID, + CertificatePoliciesOID, + CRLEntryExtensionOID, + ExtendedKeyUsageOID, + ExtensionOID, + NameOID, + ObjectIdentifier, + PublicKeyAlgorithmOID, + SignatureAlgorithmOID, +) + +OID_AUTHORITY_INFORMATION_ACCESS = ExtensionOID.AUTHORITY_INFORMATION_ACCESS +OID_AUTHORITY_KEY_IDENTIFIER = ExtensionOID.AUTHORITY_KEY_IDENTIFIER +OID_BASIC_CONSTRAINTS = ExtensionOID.BASIC_CONSTRAINTS +OID_CERTIFICATE_POLICIES = ExtensionOID.CERTIFICATE_POLICIES +OID_CRL_DISTRIBUTION_POINTS = ExtensionOID.CRL_DISTRIBUTION_POINTS +OID_EXTENDED_KEY_USAGE = ExtensionOID.EXTENDED_KEY_USAGE +OID_FRESHEST_CRL = ExtensionOID.FRESHEST_CRL +OID_INHIBIT_ANY_POLICY = ExtensionOID.INHIBIT_ANY_POLICY +OID_ISSUER_ALTERNATIVE_NAME = ExtensionOID.ISSUER_ALTERNATIVE_NAME +OID_KEY_USAGE = ExtensionOID.KEY_USAGE +OID_NAME_CONSTRAINTS = ExtensionOID.NAME_CONSTRAINTS +OID_OCSP_NO_CHECK = ExtensionOID.OCSP_NO_CHECK +OID_POLICY_CONSTRAINTS = ExtensionOID.POLICY_CONSTRAINTS +OID_POLICY_MAPPINGS = ExtensionOID.POLICY_MAPPINGS +OID_SUBJECT_ALTERNATIVE_NAME = ExtensionOID.SUBJECT_ALTERNATIVE_NAME +OID_SUBJECT_DIRECTORY_ATTRIBUTES = ExtensionOID.SUBJECT_DIRECTORY_ATTRIBUTES +OID_SUBJECT_INFORMATION_ACCESS = ExtensionOID.SUBJECT_INFORMATION_ACCESS +OID_SUBJECT_KEY_IDENTIFIER = ExtensionOID.SUBJECT_KEY_IDENTIFIER + +OID_DSA_WITH_SHA1 = SignatureAlgorithmOID.DSA_WITH_SHA1 +OID_DSA_WITH_SHA224 = SignatureAlgorithmOID.DSA_WITH_SHA224 +OID_DSA_WITH_SHA256 = SignatureAlgorithmOID.DSA_WITH_SHA256 +OID_ECDSA_WITH_SHA1 = SignatureAlgorithmOID.ECDSA_WITH_SHA1 +OID_ECDSA_WITH_SHA224 = SignatureAlgorithmOID.ECDSA_WITH_SHA224 
+OID_ECDSA_WITH_SHA256 = SignatureAlgorithmOID.ECDSA_WITH_SHA256 +OID_ECDSA_WITH_SHA384 = SignatureAlgorithmOID.ECDSA_WITH_SHA384 +OID_ECDSA_WITH_SHA512 = SignatureAlgorithmOID.ECDSA_WITH_SHA512 +OID_RSA_WITH_MD5 = SignatureAlgorithmOID.RSA_WITH_MD5 +OID_RSA_WITH_SHA1 = SignatureAlgorithmOID.RSA_WITH_SHA1 +OID_RSA_WITH_SHA224 = SignatureAlgorithmOID.RSA_WITH_SHA224 +OID_RSA_WITH_SHA256 = SignatureAlgorithmOID.RSA_WITH_SHA256 +OID_RSA_WITH_SHA384 = SignatureAlgorithmOID.RSA_WITH_SHA384 +OID_RSA_WITH_SHA512 = SignatureAlgorithmOID.RSA_WITH_SHA512 +OID_RSASSA_PSS = SignatureAlgorithmOID.RSASSA_PSS + +OID_COMMON_NAME = NameOID.COMMON_NAME +OID_COUNTRY_NAME = NameOID.COUNTRY_NAME +OID_DOMAIN_COMPONENT = NameOID.DOMAIN_COMPONENT +OID_DN_QUALIFIER = NameOID.DN_QUALIFIER +OID_EMAIL_ADDRESS = NameOID.EMAIL_ADDRESS +OID_GENERATION_QUALIFIER = NameOID.GENERATION_QUALIFIER +OID_GIVEN_NAME = NameOID.GIVEN_NAME +OID_LOCALITY_NAME = NameOID.LOCALITY_NAME +OID_ORGANIZATIONAL_UNIT_NAME = NameOID.ORGANIZATIONAL_UNIT_NAME +OID_ORGANIZATION_NAME = NameOID.ORGANIZATION_NAME +OID_PSEUDONYM = NameOID.PSEUDONYM +OID_SERIAL_NUMBER = NameOID.SERIAL_NUMBER +OID_STATE_OR_PROVINCE_NAME = NameOID.STATE_OR_PROVINCE_NAME +OID_SURNAME = NameOID.SURNAME +OID_TITLE = NameOID.TITLE + +OID_CLIENT_AUTH = ExtendedKeyUsageOID.CLIENT_AUTH +OID_CODE_SIGNING = ExtendedKeyUsageOID.CODE_SIGNING +OID_EMAIL_PROTECTION = ExtendedKeyUsageOID.EMAIL_PROTECTION +OID_OCSP_SIGNING = ExtendedKeyUsageOID.OCSP_SIGNING +OID_SERVER_AUTH = ExtendedKeyUsageOID.SERVER_AUTH +OID_TIME_STAMPING = ExtendedKeyUsageOID.TIME_STAMPING + +OID_ANY_POLICY = CertificatePoliciesOID.ANY_POLICY +OID_CPS_QUALIFIER = CertificatePoliciesOID.CPS_QUALIFIER +OID_CPS_USER_NOTICE = CertificatePoliciesOID.CPS_USER_NOTICE + +OID_CERTIFICATE_ISSUER = CRLEntryExtensionOID.CERTIFICATE_ISSUER +OID_CRL_REASON = CRLEntryExtensionOID.CRL_REASON +OID_INVALIDITY_DATE = CRLEntryExtensionOID.INVALIDITY_DATE + +OID_CA_ISSUERS = AuthorityInformationAccessOID.CA_ISSUERS +OID_OCSP = AuthorityInformationAccessOID.OCSP + +__all__ = [ + "OID_CA_ISSUERS", + "OID_OCSP", + "AccessDescription", + "Attribute", + "AttributeNotFound", + "Attributes", + "AuthorityInformationAccess", + "AuthorityKeyIdentifier", + "BasicConstraints", + "CRLDistributionPoints", + "CRLNumber", + "CRLReason", + "Certificate", + "CertificateBuilder", + "CertificateIssuer", + "CertificatePolicies", + "CertificateRevocationList", + "CertificateRevocationListBuilder", + "CertificateSigningRequest", + "CertificateSigningRequestBuilder", + "DNSName", + "DeltaCRLIndicator", + "DirectoryName", + "DistributionPoint", + "DuplicateExtension", + "ExtendedKeyUsage", + "Extension", + "ExtensionNotFound", + "ExtensionType", + "Extensions", + "FreshestCRL", + "GeneralName", + "GeneralNames", + "IPAddress", + "InhibitAnyPolicy", + "InvalidVersion", + "InvalidityDate", + "IssuerAlternativeName", + "IssuingDistributionPoint", + "KeyUsage", + "MSCertificateTemplate", + "Name", + "NameAttribute", + "NameConstraints", + "NameOID", + "NoticeReference", + "OCSPAcceptableResponses", + "OCSPNoCheck", + "OCSPNonce", + "ObjectIdentifier", + "OtherName", + "PolicyConstraints", + "PolicyInformation", + "PrecertPoison", + "PrecertificateSignedCertificateTimestamps", + "PublicKeyAlgorithmOID", + "RFC822Name", + "ReasonFlags", + "RegisteredID", + "RelativeDistinguishedName", + "RevokedCertificate", + "RevokedCertificateBuilder", + "SignatureAlgorithmOID", + "SignedCertificateTimestamps", + "SubjectAlternativeName", + "SubjectInformationAccess", + 
"SubjectKeyIdentifier", + "TLSFeature", + "TLSFeatureType", + "UniformResourceIdentifier", + "UnrecognizedExtension", + "UnsupportedGeneralNameType", + "UserNotice", + "Version", + "certificate_transparency", + "load_der_x509_certificate", + "load_der_x509_crl", + "load_der_x509_csr", + "load_pem_x509_certificate", + "load_pem_x509_certificates", + "load_pem_x509_crl", + "load_pem_x509_csr", + "random_serial_number", + "verification", + "verification", +] diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/base.py b/templates/skills/file_manager/dependencies/cryptography/x509/base.py new file mode 100644 index 00000000..6ed41e66 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/base.py @@ -0,0 +1,1226 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import datetime +import os +import typing +import warnings + +from cryptography import utils +from cryptography.hazmat.bindings._rust import x509 as rust_x509 +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric import ( + dsa, + ec, + ed448, + ed25519, + padding, + rsa, + x448, + x25519, +) +from cryptography.hazmat.primitives.asymmetric.types import ( + CertificateIssuerPrivateKeyTypes, + CertificateIssuerPublicKeyTypes, + CertificatePublicKeyTypes, +) +from cryptography.x509.extensions import ( + Extension, + Extensions, + ExtensionType, + _make_sequence_methods, +) +from cryptography.x509.name import Name, _ASN1Type +from cryptography.x509.oid import ObjectIdentifier + +_EARLIEST_UTC_TIME = datetime.datetime(1950, 1, 1) + +# This must be kept in sync with sign.rs's list of allowable types in +# identify_hash_type +_AllowedHashTypes = typing.Union[ + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, + hashes.SHA3_224, + hashes.SHA3_256, + hashes.SHA3_384, + hashes.SHA3_512, +] + + +class AttributeNotFound(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super().__init__(msg) + self.oid = oid + + +def _reject_duplicate_extension( + extension: Extension[ExtensionType], + extensions: list[Extension[ExtensionType]], +) -> None: + # This is quadratic in the number of extensions + for e in extensions: + if e.oid == extension.oid: + raise ValueError("This extension has already been set.") + + +def _reject_duplicate_attribute( + oid: ObjectIdentifier, + attributes: list[tuple[ObjectIdentifier, bytes, int | None]], +) -> None: + # This is quadratic in the number of attributes + for attr_oid, _, _ in attributes: + if attr_oid == oid: + raise ValueError("This attribute has already been set.") + + +def _convert_to_naive_utc_time(time: datetime.datetime) -> datetime.datetime: + """Normalizes a datetime to a naive datetime in UTC. + + time -- datetime to normalize. Assumed to be in UTC if not timezone + aware. 
+    """
+    if time.tzinfo is not None:
+        offset = time.utcoffset()
+        offset = offset if offset else datetime.timedelta()
+        return time.replace(tzinfo=None) - offset
+    else:
+        return time
+
+
+class Attribute:
+    def __init__(
+        self,
+        oid: ObjectIdentifier,
+        value: bytes,
+        _type: int = _ASN1Type.UTF8String.value,
+    ) -> None:
+        self._oid = oid
+        self._value = value
+        self._type = _type
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return f"<Attribute(oid={self.oid}, value={self.value!r})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Attribute):
+            return NotImplemented
+
+        return (
+            self.oid == other.oid
+            and self.value == other.value
+            and self._type == other._type
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.value, self._type))
+
+
+class Attributes:
+    def __init__(
+        self,
+        attributes: typing.Iterable[Attribute],
+    ) -> None:
+        self._attributes = list(attributes)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_attributes")
+
+    def __repr__(self) -> str:
+        return f"<Attributes({self._attributes})>"
+
+    def get_attribute_for_oid(self, oid: ObjectIdentifier) -> Attribute:
+        for attr in self:
+            if attr.oid == oid:
+                return attr
+
+        raise AttributeNotFound(f"No {oid} attribute was found", oid)
+
+
+class Version(utils.Enum):
+    v1 = 0
+    v3 = 2
+
+
+class InvalidVersion(Exception):
+    def __init__(self, msg: str, parsed_version: int) -> None:
+        super().__init__(msg)
+        self.parsed_version = parsed_version
+
+
+class Certificate(metaclass=abc.ABCMeta):
+    @abc.abstractmethod
+    def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes:
+        """
+        Returns bytes using digest passed.
+        """
+
+    @property
+    @abc.abstractmethod
+    def serial_number(self) -> int:
+        """
+        Returns certificate serial number
+        """
+
+    @property
+    @abc.abstractmethod
+    def version(self) -> Version:
+        """
+        Returns the certificate version
+        """
+
+    @abc.abstractmethod
+    def public_key(self) -> CertificatePublicKeyTypes:
+        """
+        Returns the public key
+        """
+
+    @property
+    @abc.abstractmethod
+    def public_key_algorithm_oid(self) -> ObjectIdentifier:
+        """
+        Returns the ObjectIdentifier of the public key.
+        """
+
+    @property
+    @abc.abstractmethod
+    def not_valid_before(self) -> datetime.datetime:
+        """
+        Not before time (represented as UTC datetime)
+        """
+
+    @property
+    @abc.abstractmethod
+    def not_valid_before_utc(self) -> datetime.datetime:
+        """
+        Not before time (represented as a non-naive UTC datetime)
+        """
+
+    @property
+    @abc.abstractmethod
+    def not_valid_after(self) -> datetime.datetime:
+        """
+        Not after time (represented as UTC datetime)
+        """
+
+    @property
+    @abc.abstractmethod
+    def not_valid_after_utc(self) -> datetime.datetime:
+        """
+        Not after time (represented as a non-naive UTC datetime)
+        """
+
+    @property
+    @abc.abstractmethod
+    def issuer(self) -> Name:
+        """
+        Returns the issuer name object.
+        """
+
+    @property
+    @abc.abstractmethod
+    def subject(self) -> Name:
+        """
+        Returns the subject name object.
+        """
+
+    @property
+    @abc.abstractmethod
+    def signature_hash_algorithm(
+        self,
+    ) -> hashes.HashAlgorithm | None:
+        """
+        Returns a HashAlgorithm corresponding to the type of the digest signed
+        in the certificate.
+        """
+
+    @property
+    @abc.abstractmethod
+    def signature_algorithm_oid(self) -> ObjectIdentifier:
+        """
+        Returns the ObjectIdentifier of the signature algorithm.
+ """ + + @property + @abc.abstractmethod + def signature_algorithm_parameters( + self, + ) -> None | padding.PSS | padding.PKCS1v15 | ec.ECDSA: + """ + Returns the signature algorithm parameters. + """ + + @property + @abc.abstractmethod + def extensions(self) -> Extensions: + """ + Returns an Extensions object. + """ + + @property + @abc.abstractmethod + def signature(self) -> bytes: + """ + Returns the signature bytes. + """ + + @property + @abc.abstractmethod + def tbs_certificate_bytes(self) -> bytes: + """ + Returns the tbsCertificate payload bytes as defined in RFC 5280. + """ + + @property + @abc.abstractmethod + def tbs_precertificate_bytes(self) -> bytes: + """ + Returns the tbsCertificate payload bytes with the SCT list extension + stripped. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + @abc.abstractmethod + def __hash__(self) -> int: + """ + Computes a hash. + """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the certificate to PEM or DER format. + """ + + @abc.abstractmethod + def verify_directly_issued_by(self, issuer: Certificate) -> None: + """ + This method verifies that certificate issuer name matches the + issuer subject name and that the certificate is signed by the + issuer's private key. No other validation is performed. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +Certificate.register(rust_x509.Certificate) + + +class RevokedCertificate(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def serial_number(self) -> int: + """ + Returns the serial number of the revoked certificate. + """ + + @property + @abc.abstractmethod + def revocation_date(self) -> datetime.datetime: + """ + Returns the date of when this certificate was revoked. + """ + + @property + @abc.abstractmethod + def revocation_date_utc(self) -> datetime.datetime: + """ + Returns the date of when this certificate was revoked as a non-naive + UTC datetime. + """ + + @property + @abc.abstractmethod + def extensions(self) -> Extensions: + """ + Returns an Extensions object containing a list of Revoked extensions. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +RevokedCertificate.register(rust_x509.RevokedCertificate) + + +class _RawRevokedCertificate(RevokedCertificate): + def __init__( + self, + serial_number: int, + revocation_date: datetime.datetime, + extensions: Extensions, + ): + self._serial_number = serial_number + self._revocation_date = revocation_date + self._extensions = extensions + + @property + def serial_number(self) -> int: + return self._serial_number + + @property + def revocation_date(self) -> datetime.datetime: + warnings.warn( + "Properties that return a naïve datetime object have been " + "deprecated. Please switch to revocation_date_utc.", + utils.DeprecatedIn42, + stacklevel=2, + ) + return self._revocation_date + + @property + def revocation_date_utc(self) -> datetime.datetime: + return self._revocation_date.replace(tzinfo=datetime.timezone.utc) + + @property + def extensions(self) -> Extensions: + return self._extensions + + +class CertificateRevocationList(metaclass=abc.ABCMeta): + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the CRL to PEM or DER format. + """ + + @abc.abstractmethod + def fingerprint(self, algorithm: hashes.HashAlgorithm) -> bytes: + """ + Returns bytes using digest passed. 
+ """ + + @abc.abstractmethod + def get_revoked_certificate_by_serial_number( + self, serial_number: int + ) -> RevokedCertificate | None: + """ + Returns an instance of RevokedCertificate or None if the serial_number + is not in the CRL. + """ + + @property + @abc.abstractmethod + def signature_hash_algorithm( + self, + ) -> hashes.HashAlgorithm | None: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + in the certificate. + """ + + @property + @abc.abstractmethod + def signature_algorithm_oid(self) -> ObjectIdentifier: + """ + Returns the ObjectIdentifier of the signature algorithm. + """ + + @property + @abc.abstractmethod + def signature_algorithm_parameters( + self, + ) -> None | padding.PSS | padding.PKCS1v15 | ec.ECDSA: + """ + Returns the signature algorithm parameters. + """ + + @property + @abc.abstractmethod + def issuer(self) -> Name: + """ + Returns the X509Name with the issuer of this CRL. + """ + + @property + @abc.abstractmethod + def next_update(self) -> datetime.datetime | None: + """ + Returns the date of next update for this CRL. + """ + + @property + @abc.abstractmethod + def next_update_utc(self) -> datetime.datetime | None: + """ + Returns the date of next update for this CRL as a non-naive UTC + datetime. + """ + + @property + @abc.abstractmethod + def last_update(self) -> datetime.datetime: + """ + Returns the date of last update for this CRL. + """ + + @property + @abc.abstractmethod + def last_update_utc(self) -> datetime.datetime: + """ + Returns the date of last update for this CRL as a non-naive UTC + datetime. + """ + + @property + @abc.abstractmethod + def extensions(self) -> Extensions: + """ + Returns an Extensions object containing a list of CRL extensions. + """ + + @property + @abc.abstractmethod + def signature(self) -> bytes: + """ + Returns the signature bytes. + """ + + @property + @abc.abstractmethod + def tbs_certlist_bytes(self) -> bytes: + """ + Returns the tbsCertList payload bytes as defined in RFC 5280. + """ + + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + @abc.abstractmethod + def __len__(self) -> int: + """ + Number of revoked certificates in the CRL. + """ + + @typing.overload + def __getitem__(self, idx: int) -> RevokedCertificate: ... + + @typing.overload + def __getitem__(self, idx: slice) -> list[RevokedCertificate]: ... + + @abc.abstractmethod + def __getitem__( + self, idx: int | slice + ) -> RevokedCertificate | list[RevokedCertificate]: + """ + Returns a revoked certificate (or slice of revoked certificates). + """ + + @abc.abstractmethod + def __iter__(self) -> typing.Iterator[RevokedCertificate]: + """ + Iterator over the revoked certificates + """ + + @abc.abstractmethod + def is_signature_valid( + self, public_key: CertificateIssuerPublicKeyTypes + ) -> bool: + """ + Verifies signature of revocation list against given public key. + """ + + +CertificateRevocationList.register(rust_x509.CertificateRevocationList) + + +class CertificateSigningRequest(metaclass=abc.ABCMeta): + @abc.abstractmethod + def __eq__(self, other: object) -> bool: + """ + Checks equality. + """ + + @abc.abstractmethod + def __hash__(self) -> int: + """ + Computes a hash. + """ + + @abc.abstractmethod + def public_key(self) -> CertificatePublicKeyTypes: + """ + Returns the public key + """ + + @property + @abc.abstractmethod + def subject(self) -> Name: + """ + Returns the subject name object. 
+ """ + + @property + @abc.abstractmethod + def signature_hash_algorithm( + self, + ) -> hashes.HashAlgorithm | None: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + in the certificate. + """ + + @property + @abc.abstractmethod + def signature_algorithm_oid(self) -> ObjectIdentifier: + """ + Returns the ObjectIdentifier of the signature algorithm. + """ + + @property + @abc.abstractmethod + def signature_algorithm_parameters( + self, + ) -> None | padding.PSS | padding.PKCS1v15 | ec.ECDSA: + """ + Returns the signature algorithm parameters. + """ + + @property + @abc.abstractmethod + def extensions(self) -> Extensions: + """ + Returns the extensions in the signing request. + """ + + @property + @abc.abstractmethod + def attributes(self) -> Attributes: + """ + Returns an Attributes object. + """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Encodes the request to PEM or DER format. + """ + + @property + @abc.abstractmethod + def signature(self) -> bytes: + """ + Returns the signature bytes. + """ + + @property + @abc.abstractmethod + def tbs_certrequest_bytes(self) -> bytes: + """ + Returns the PKCS#10 CertificationRequestInfo bytes as defined in RFC + 2986. + """ + + @property + @abc.abstractmethod + def is_signature_valid(self) -> bool: + """ + Verifies signature of signing request. + """ + + @abc.abstractmethod + def get_attribute_for_oid(self, oid: ObjectIdentifier) -> bytes: + """ + Get the attribute value for a given OID. + """ + + +# Runtime isinstance checks need this since the rust class is not a subclass. +CertificateSigningRequest.register(rust_x509.CertificateSigningRequest) + + +load_pem_x509_certificate = rust_x509.load_pem_x509_certificate +load_der_x509_certificate = rust_x509.load_der_x509_certificate + +load_pem_x509_certificates = rust_x509.load_pem_x509_certificates + +load_pem_x509_csr = rust_x509.load_pem_x509_csr +load_der_x509_csr = rust_x509.load_der_x509_csr + +load_pem_x509_crl = rust_x509.load_pem_x509_crl +load_der_x509_crl = rust_x509.load_der_x509_crl + + +class CertificateSigningRequestBuilder: + def __init__( + self, + subject_name: Name | None = None, + extensions: list[Extension[ExtensionType]] = [], + attributes: list[tuple[ObjectIdentifier, bytes, int | None]] = [], + ): + """ + Creates an empty X.509 certificate request (v1). + """ + self._subject_name = subject_name + self._extensions = extensions + self._attributes = attributes + + def subject_name(self, name: Name) -> CertificateSigningRequestBuilder: + """ + Sets the certificate requestor's distinguished name. + """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._subject_name is not None: + raise ValueError("The subject name may only be set once.") + return CertificateSigningRequestBuilder( + name, self._extensions, self._attributes + ) + + def add_extension( + self, extval: ExtensionType, critical: bool + ) -> CertificateSigningRequestBuilder: + """ + Adds an X.509 extension to the certificate request. 
+ """ + if not isinstance(extval, ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return CertificateSigningRequestBuilder( + self._subject_name, + [*self._extensions, extension], + self._attributes, + ) + + def add_attribute( + self, + oid: ObjectIdentifier, + value: bytes, + *, + _tag: _ASN1Type | None = None, + ) -> CertificateSigningRequestBuilder: + """ + Adds an X.509 attribute with an OID and associated value. + """ + if not isinstance(oid, ObjectIdentifier): + raise TypeError("oid must be an ObjectIdentifier") + + if not isinstance(value, bytes): + raise TypeError("value must be bytes") + + if _tag is not None and not isinstance(_tag, _ASN1Type): + raise TypeError("tag must be _ASN1Type") + + _reject_duplicate_attribute(oid, self._attributes) + + if _tag is not None: + tag = _tag.value + else: + tag = None + + return CertificateSigningRequestBuilder( + self._subject_name, + self._extensions, + [*self._attributes, (oid, value, tag)], + ) + + def sign( + self, + private_key: CertificateIssuerPrivateKeyTypes, + algorithm: _AllowedHashTypes | None, + backend: typing.Any = None, + *, + rsa_padding: padding.PSS | padding.PKCS1v15 | None = None, + ) -> CertificateSigningRequest: + """ + Signs the request using the requestor's private key. + """ + if self._subject_name is None: + raise ValueError("A CertificateSigningRequest must have a subject") + + if rsa_padding is not None: + if not isinstance(rsa_padding, (padding.PSS, padding.PKCS1v15)): + raise TypeError("Padding must be PSS or PKCS1v15") + if not isinstance(private_key, rsa.RSAPrivateKey): + raise TypeError("Padding is only supported for RSA keys") + + return rust_x509.create_x509_csr( + self, private_key, algorithm, rsa_padding + ) + + +class CertificateBuilder: + _extensions: list[Extension[ExtensionType]] + + def __init__( + self, + issuer_name: Name | None = None, + subject_name: Name | None = None, + public_key: CertificatePublicKeyTypes | None = None, + serial_number: int | None = None, + not_valid_before: datetime.datetime | None = None, + not_valid_after: datetime.datetime | None = None, + extensions: list[Extension[ExtensionType]] = [], + ) -> None: + self._version = Version.v3 + self._issuer_name = issuer_name + self._subject_name = subject_name + self._public_key = public_key + self._serial_number = serial_number + self._not_valid_before = not_valid_before + self._not_valid_after = not_valid_after + self._extensions = extensions + + def issuer_name(self, name: Name) -> CertificateBuilder: + """ + Sets the CA's distinguished name. + """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._issuer_name is not None: + raise ValueError("The issuer name may only be set once.") + return CertificateBuilder( + name, + self._subject_name, + self._public_key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def subject_name(self, name: Name) -> CertificateBuilder: + """ + Sets the requestor's distinguished name. 
+ """ + if not isinstance(name, Name): + raise TypeError("Expecting x509.Name object.") + if self._subject_name is not None: + raise ValueError("The subject name may only be set once.") + return CertificateBuilder( + self._issuer_name, + name, + self._public_key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def public_key( + self, + key: CertificatePublicKeyTypes, + ) -> CertificateBuilder: + """ + Sets the requestor's public key (as found in the signing request). + """ + if not isinstance( + key, + ( + dsa.DSAPublicKey, + rsa.RSAPublicKey, + ec.EllipticCurvePublicKey, + ed25519.Ed25519PublicKey, + ed448.Ed448PublicKey, + x25519.X25519PublicKey, + x448.X448PublicKey, + ), + ): + raise TypeError( + "Expecting one of DSAPublicKey, RSAPublicKey," + " EllipticCurvePublicKey, Ed25519PublicKey," + " Ed448PublicKey, X25519PublicKey, or " + "X448PublicKey." + ) + if self._public_key is not None: + raise ValueError("The public key may only be set once.") + return CertificateBuilder( + self._issuer_name, + self._subject_name, + key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def serial_number(self, number: int) -> CertificateBuilder: + """ + Sets the certificate serial number. + """ + if not isinstance(number, int): + raise TypeError("Serial number must be of integral type.") + if self._serial_number is not None: + raise ValueError("The serial number may only be set once.") + if number <= 0: + raise ValueError("The serial number should be positive.") + + # ASN.1 integers are always signed, so most significant bit must be + # zero. + if number.bit_length() >= 160: # As defined in RFC 5280 + raise ValueError( + "The serial number should not be more than 159 bits." + ) + return CertificateBuilder( + self._issuer_name, + self._subject_name, + self._public_key, + number, + self._not_valid_before, + self._not_valid_after, + self._extensions, + ) + + def not_valid_before(self, time: datetime.datetime) -> CertificateBuilder: + """ + Sets the certificate activation time. + """ + if not isinstance(time, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._not_valid_before is not None: + raise ValueError("The not valid before may only be set once.") + time = _convert_to_naive_utc_time(time) + if time < _EARLIEST_UTC_TIME: + raise ValueError( + "The not valid before date must be on or after" + " 1950 January 1)." + ) + if self._not_valid_after is not None and time > self._not_valid_after: + raise ValueError( + "The not valid before date must be before the not valid after " + "date." + ) + return CertificateBuilder( + self._issuer_name, + self._subject_name, + self._public_key, + self._serial_number, + time, + self._not_valid_after, + self._extensions, + ) + + def not_valid_after(self, time: datetime.datetime) -> CertificateBuilder: + """ + Sets the certificate expiration time. + """ + if not isinstance(time, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._not_valid_after is not None: + raise ValueError("The not valid after may only be set once.") + time = _convert_to_naive_utc_time(time) + if time < _EARLIEST_UTC_TIME: + raise ValueError( + "The not valid after date must be on or after" + " 1950 January 1." + ) + if ( + self._not_valid_before is not None + and time < self._not_valid_before + ): + raise ValueError( + "The not valid after date must be after the not valid before " + "date." 
+ ) + return CertificateBuilder( + self._issuer_name, + self._subject_name, + self._public_key, + self._serial_number, + self._not_valid_before, + time, + self._extensions, + ) + + def add_extension( + self, extval: ExtensionType, critical: bool + ) -> CertificateBuilder: + """ + Adds an X.509 extension to the certificate. + """ + if not isinstance(extval, ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return CertificateBuilder( + self._issuer_name, + self._subject_name, + self._public_key, + self._serial_number, + self._not_valid_before, + self._not_valid_after, + [*self._extensions, extension], + ) + + def sign( + self, + private_key: CertificateIssuerPrivateKeyTypes, + algorithm: _AllowedHashTypes | None, + backend: typing.Any = None, + *, + rsa_padding: padding.PSS | padding.PKCS1v15 | None = None, + ) -> Certificate: + """ + Signs the certificate using the CA's private key. + """ + if self._subject_name is None: + raise ValueError("A certificate must have a subject name") + + if self._issuer_name is None: + raise ValueError("A certificate must have an issuer name") + + if self._serial_number is None: + raise ValueError("A certificate must have a serial number") + + if self._not_valid_before is None: + raise ValueError("A certificate must have a not valid before time") + + if self._not_valid_after is None: + raise ValueError("A certificate must have a not valid after time") + + if self._public_key is None: + raise ValueError("A certificate must have a public key") + + if rsa_padding is not None: + if not isinstance(rsa_padding, (padding.PSS, padding.PKCS1v15)): + raise TypeError("Padding must be PSS or PKCS1v15") + if not isinstance(private_key, rsa.RSAPrivateKey): + raise TypeError("Padding is only supported for RSA keys") + + return rust_x509.create_x509_certificate( + self, private_key, algorithm, rsa_padding + ) + + +class CertificateRevocationListBuilder: + _extensions: list[Extension[ExtensionType]] + _revoked_certificates: list[RevokedCertificate] + + def __init__( + self, + issuer_name: Name | None = None, + last_update: datetime.datetime | None = None, + next_update: datetime.datetime | None = None, + extensions: list[Extension[ExtensionType]] = [], + revoked_certificates: list[RevokedCertificate] = [], + ): + self._issuer_name = issuer_name + self._last_update = last_update + self._next_update = next_update + self._extensions = extensions + self._revoked_certificates = revoked_certificates + + def issuer_name( + self, issuer_name: Name + ) -> CertificateRevocationListBuilder: + if not isinstance(issuer_name, Name): + raise TypeError("Expecting x509.Name object.") + if self._issuer_name is not None: + raise ValueError("The issuer name may only be set once.") + return CertificateRevocationListBuilder( + issuer_name, + self._last_update, + self._next_update, + self._extensions, + self._revoked_certificates, + ) + + def last_update( + self, last_update: datetime.datetime + ) -> CertificateRevocationListBuilder: + if not isinstance(last_update, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._last_update is not None: + raise ValueError("Last update may only be set once.") + last_update = _convert_to_naive_utc_time(last_update) + if last_update < _EARLIEST_UTC_TIME: + raise ValueError( + "The last update date must be on or after 1950 January 1." 
+ ) + if self._next_update is not None and last_update > self._next_update: + raise ValueError( + "The last update date must be before the next update date." + ) + return CertificateRevocationListBuilder( + self._issuer_name, + last_update, + self._next_update, + self._extensions, + self._revoked_certificates, + ) + + def next_update( + self, next_update: datetime.datetime + ) -> CertificateRevocationListBuilder: + if not isinstance(next_update, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._next_update is not None: + raise ValueError("Last update may only be set once.") + next_update = _convert_to_naive_utc_time(next_update) + if next_update < _EARLIEST_UTC_TIME: + raise ValueError( + "The last update date must be on or after 1950 January 1." + ) + if self._last_update is not None and next_update < self._last_update: + raise ValueError( + "The next update date must be after the last update date." + ) + return CertificateRevocationListBuilder( + self._issuer_name, + self._last_update, + next_update, + self._extensions, + self._revoked_certificates, + ) + + def add_extension( + self, extval: ExtensionType, critical: bool + ) -> CertificateRevocationListBuilder: + """ + Adds an X.509 extension to the certificate revocation list. + """ + if not isinstance(extval, ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + return CertificateRevocationListBuilder( + self._issuer_name, + self._last_update, + self._next_update, + [*self._extensions, extension], + self._revoked_certificates, + ) + + def add_revoked_certificate( + self, revoked_certificate: RevokedCertificate + ) -> CertificateRevocationListBuilder: + """ + Adds a revoked certificate to the CRL. 
+ """ + if not isinstance(revoked_certificate, RevokedCertificate): + raise TypeError("Must be an instance of RevokedCertificate") + + return CertificateRevocationListBuilder( + self._issuer_name, + self._last_update, + self._next_update, + self._extensions, + [*self._revoked_certificates, revoked_certificate], + ) + + def sign( + self, + private_key: CertificateIssuerPrivateKeyTypes, + algorithm: _AllowedHashTypes | None, + backend: typing.Any = None, + *, + rsa_padding: padding.PSS | padding.PKCS1v15 | None = None, + ) -> CertificateRevocationList: + if self._issuer_name is None: + raise ValueError("A CRL must have an issuer name") + + if self._last_update is None: + raise ValueError("A CRL must have a last update time") + + if self._next_update is None: + raise ValueError("A CRL must have a next update time") + + if rsa_padding is not None: + if not isinstance(rsa_padding, (padding.PSS, padding.PKCS1v15)): + raise TypeError("Padding must be PSS or PKCS1v15") + if not isinstance(private_key, rsa.RSAPrivateKey): + raise TypeError("Padding is only supported for RSA keys") + + return rust_x509.create_x509_crl( + self, private_key, algorithm, rsa_padding + ) + + +class RevokedCertificateBuilder: + def __init__( + self, + serial_number: int | None = None, + revocation_date: datetime.datetime | None = None, + extensions: list[Extension[ExtensionType]] = [], + ): + self._serial_number = serial_number + self._revocation_date = revocation_date + self._extensions = extensions + + def serial_number(self, number: int) -> RevokedCertificateBuilder: + if not isinstance(number, int): + raise TypeError("Serial number must be of integral type.") + if self._serial_number is not None: + raise ValueError("The serial number may only be set once.") + if number <= 0: + raise ValueError("The serial number should be positive") + + # ASN.1 integers are always signed, so most significant bit must be + # zero. + if number.bit_length() >= 160: # As defined in RFC 5280 + raise ValueError( + "The serial number should not be more than 159 bits." + ) + return RevokedCertificateBuilder( + number, self._revocation_date, self._extensions + ) + + def revocation_date( + self, time: datetime.datetime + ) -> RevokedCertificateBuilder: + if not isinstance(time, datetime.datetime): + raise TypeError("Expecting datetime object.") + if self._revocation_date is not None: + raise ValueError("The revocation date may only be set once.") + time = _convert_to_naive_utc_time(time) + if time < _EARLIEST_UTC_TIME: + raise ValueError( + "The revocation date must be on or after 1950 January 1." 
+            )
+        return RevokedCertificateBuilder(
+            self._serial_number, time, self._extensions
+        )
+
+    def add_extension(
+        self, extval: ExtensionType, critical: bool
+    ) -> RevokedCertificateBuilder:
+        if not isinstance(extval, ExtensionType):
+            raise TypeError("extension must be an ExtensionType")
+
+        extension = Extension(extval.oid, critical, extval)
+        _reject_duplicate_extension(extension, self._extensions)
+        return RevokedCertificateBuilder(
+            self._serial_number,
+            self._revocation_date,
+            [*self._extensions, extension],
+        )
+
+    def build(self, backend: typing.Any = None) -> RevokedCertificate:
+        if self._serial_number is None:
+            raise ValueError("A revoked certificate must have a serial number")
+        if self._revocation_date is None:
+            raise ValueError(
+                "A revoked certificate must have a revocation date"
+            )
+        return _RawRevokedCertificate(
+            self._serial_number,
+            self._revocation_date,
+            Extensions(self._extensions),
+        )
+
+
+def random_serial_number() -> int:
+    return int.from_bytes(os.urandom(20), "big") >> 1
diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/certificate_transparency.py b/templates/skills/file_manager/dependencies/cryptography/x509/certificate_transparency.py
new file mode 100644
index 00000000..73647ee7
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cryptography/x509/certificate_transparency.py
@@ -0,0 +1,97 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+from __future__ import annotations
+
+import abc
+import datetime
+
+from cryptography import utils
+from cryptography.hazmat.bindings._rust import x509 as rust_x509
+from cryptography.hazmat.primitives.hashes import HashAlgorithm
+
+
+class LogEntryType(utils.Enum):
+    X509_CERTIFICATE = 0
+    PRE_CERTIFICATE = 1
+
+
+class Version(utils.Enum):
+    v1 = 0
+
+
+class SignatureAlgorithm(utils.Enum):
+    """
+    Signature algorithms that are valid for SCTs.
+
+    These are exactly the same as SignatureAlgorithm in RFC 5246 (TLS 1.2).
+
+    See: <https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.1.4.1>
+    """
+
+    ANONYMOUS = 0
+    RSA = 1
+    DSA = 2
+    ECDSA = 3
+
+
+class SignedCertificateTimestamp(metaclass=abc.ABCMeta):
+    @property
+    @abc.abstractmethod
+    def version(self) -> Version:
+        """
+        Returns the SCT version.
+        """
+
+    @property
+    @abc.abstractmethod
+    def log_id(self) -> bytes:
+        """
+        Returns an identifier indicating which log this SCT is for.
+        """
+
+    @property
+    @abc.abstractmethod
+    def timestamp(self) -> datetime.datetime:
+        """
+        Returns the timestamp for this SCT.
+        """
+
+    @property
+    @abc.abstractmethod
+    def entry_type(self) -> LogEntryType:
+        """
+        Returns whether this is an SCT for a certificate or pre-certificate.
+        """
+
+    @property
+    @abc.abstractmethod
+    def signature_hash_algorithm(self) -> HashAlgorithm:
+        """
+        Returns the hash algorithm used for the SCT's signature.
+        """
+
+    @property
+    @abc.abstractmethod
+    def signature_algorithm(self) -> SignatureAlgorithm:
+        """
+        Returns the signing algorithm used for the SCT's signature.
+        """
+
+    @property
+    @abc.abstractmethod
+    def signature(self) -> bytes:
+        """
+        Returns the signature for this SCT.
+        """
+
+    @property
+    @abc.abstractmethod
+    def extension_bytes(self) -> bytes:
+        """
+        Returns the raw bytes of any extensions for this SCT.
+ """ + + +SignedCertificateTimestamp.register(rust_x509.Sct) diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/extensions.py b/templates/skills/file_manager/dependencies/cryptography/x509/extensions.py new file mode 100644 index 00000000..5e7486a5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/extensions.py @@ -0,0 +1,2196 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import datetime +import hashlib +import ipaddress +import typing + +from cryptography import utils +from cryptography.hazmat.bindings._rust import asn1 +from cryptography.hazmat.bindings._rust import x509 as rust_x509 +from cryptography.hazmat.primitives import constant_time, serialization +from cryptography.hazmat.primitives.asymmetric.ec import EllipticCurvePublicKey +from cryptography.hazmat.primitives.asymmetric.rsa import RSAPublicKey +from cryptography.hazmat.primitives.asymmetric.types import ( + CertificateIssuerPublicKeyTypes, + CertificatePublicKeyTypes, +) +from cryptography.x509.certificate_transparency import ( + SignedCertificateTimestamp, +) +from cryptography.x509.general_name import ( + DirectoryName, + DNSName, + GeneralName, + IPAddress, + OtherName, + RegisteredID, + RFC822Name, + UniformResourceIdentifier, + _IPAddressTypes, +) +from cryptography.x509.name import Name, RelativeDistinguishedName +from cryptography.x509.oid import ( + CRLEntryExtensionOID, + ExtensionOID, + ObjectIdentifier, + OCSPExtensionOID, +) + +ExtensionTypeVar = typing.TypeVar( + "ExtensionTypeVar", bound="ExtensionType", covariant=True +) + + +def _key_identifier_from_public_key( + public_key: CertificatePublicKeyTypes, +) -> bytes: + if isinstance(public_key, RSAPublicKey): + data = public_key.public_bytes( + serialization.Encoding.DER, + serialization.PublicFormat.PKCS1, + ) + elif isinstance(public_key, EllipticCurvePublicKey): + data = public_key.public_bytes( + serialization.Encoding.X962, + serialization.PublicFormat.UncompressedPoint, + ) + else: + # This is a very slow way to do this. + serialized = public_key.public_bytes( + serialization.Encoding.DER, + serialization.PublicFormat.SubjectPublicKeyInfo, + ) + data = asn1.parse_spki_for_data(serialized) + + return hashlib.sha1(data).digest() + + +def _make_sequence_methods(field_name: str): + def len_method(self) -> int: + return len(getattr(self, field_name)) + + def iter_method(self): + return iter(getattr(self, field_name)) + + def getitem_method(self, idx): + return getattr(self, field_name)[idx] + + return len_method, iter_method, getitem_method + + +class DuplicateExtension(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super().__init__(msg) + self.oid = oid + + +class ExtensionNotFound(Exception): + def __init__(self, msg: str, oid: ObjectIdentifier) -> None: + super().__init__(msg) + self.oid = oid + + +class ExtensionType(metaclass=abc.ABCMeta): + oid: typing.ClassVar[ObjectIdentifier] + + def public_bytes(self) -> bytes: + """ + Serializes the extension type to DER. 
+ """ + raise NotImplementedError( + f"public_bytes is not implemented for extension type {self!r}" + ) + + +class Extensions: + def __init__( + self, extensions: typing.Iterable[Extension[ExtensionType]] + ) -> None: + self._extensions = list(extensions) + + def get_extension_for_oid( + self, oid: ObjectIdentifier + ) -> Extension[ExtensionType]: + for ext in self: + if ext.oid == oid: + return ext + + raise ExtensionNotFound(f"No {oid} extension was found", oid) + + def get_extension_for_class( + self, extclass: type[ExtensionTypeVar] + ) -> Extension[ExtensionTypeVar]: + if extclass is UnrecognizedExtension: + raise TypeError( + "UnrecognizedExtension can't be used with " + "get_extension_for_class because more than one instance of the" + " class may be present." + ) + + for ext in self: + if isinstance(ext.value, extclass): + return ext + + raise ExtensionNotFound( + f"No {extclass} extension was found", extclass.oid + ) + + __len__, __iter__, __getitem__ = _make_sequence_methods("_extensions") + + def __repr__(self) -> str: + return f"" + + +class CRLNumber(ExtensionType): + oid = ExtensionOID.CRL_NUMBER + + def __init__(self, crl_number: int) -> None: + if not isinstance(crl_number, int): + raise TypeError("crl_number must be an integer") + + self._crl_number = crl_number + + def __eq__(self, other: object) -> bool: + if not isinstance(other, CRLNumber): + return NotImplemented + + return self.crl_number == other.crl_number + + def __hash__(self) -> int: + return hash(self.crl_number) + + def __repr__(self) -> str: + return f"" + + @property + def crl_number(self) -> int: + return self._crl_number + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class AuthorityKeyIdentifier(ExtensionType): + oid = ExtensionOID.AUTHORITY_KEY_IDENTIFIER + + def __init__( + self, + key_identifier: bytes | None, + authority_cert_issuer: typing.Iterable[GeneralName] | None, + authority_cert_serial_number: int | None, + ) -> None: + if (authority_cert_issuer is None) != ( + authority_cert_serial_number is None + ): + raise ValueError( + "authority_cert_issuer and authority_cert_serial_number " + "must both be present or both None" + ) + + if authority_cert_issuer is not None: + authority_cert_issuer = list(authority_cert_issuer) + if not all( + isinstance(x, GeneralName) for x in authority_cert_issuer + ): + raise TypeError( + "authority_cert_issuer must be a list of GeneralName " + "objects" + ) + + if authority_cert_serial_number is not None and not isinstance( + authority_cert_serial_number, int + ): + raise TypeError("authority_cert_serial_number must be an integer") + + self._key_identifier = key_identifier + self._authority_cert_issuer = authority_cert_issuer + self._authority_cert_serial_number = authority_cert_serial_number + + # This takes a subset of CertificatePublicKeyTypes because an issuer + # cannot have an X25519/X448 key. This introduces some unfortunate + # asymmetry that requires typing users to explicitly + # narrow their type, but we should make this accurate and not just + # convenient. 
+ @classmethod + def from_issuer_public_key( + cls, public_key: CertificateIssuerPublicKeyTypes + ) -> AuthorityKeyIdentifier: + digest = _key_identifier_from_public_key(public_key) + return cls( + key_identifier=digest, + authority_cert_issuer=None, + authority_cert_serial_number=None, + ) + + @classmethod + def from_issuer_subject_key_identifier( + cls, ski: SubjectKeyIdentifier + ) -> AuthorityKeyIdentifier: + return cls( + key_identifier=ski.digest, + authority_cert_issuer=None, + authority_cert_serial_number=None, + ) + + def __repr__(self) -> str: + return ( + f"" + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, AuthorityKeyIdentifier): + return NotImplemented + + return ( + self.key_identifier == other.key_identifier + and self.authority_cert_issuer == other.authority_cert_issuer + and self.authority_cert_serial_number + == other.authority_cert_serial_number + ) + + def __hash__(self) -> int: + if self.authority_cert_issuer is None: + aci = None + else: + aci = tuple(self.authority_cert_issuer) + return hash( + (self.key_identifier, aci, self.authority_cert_serial_number) + ) + + @property + def key_identifier(self) -> bytes | None: + return self._key_identifier + + @property + def authority_cert_issuer( + self, + ) -> list[GeneralName] | None: + return self._authority_cert_issuer + + @property + def authority_cert_serial_number(self) -> int | None: + return self._authority_cert_serial_number + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class SubjectKeyIdentifier(ExtensionType): + oid = ExtensionOID.SUBJECT_KEY_IDENTIFIER + + def __init__(self, digest: bytes) -> None: + self._digest = digest + + @classmethod + def from_public_key( + cls, public_key: CertificatePublicKeyTypes + ) -> SubjectKeyIdentifier: + return cls(_key_identifier_from_public_key(public_key)) + + @property + def digest(self) -> bytes: + return self._digest + + @property + def key_identifier(self) -> bytes: + return self._digest + + def __repr__(self) -> str: + return f"" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, SubjectKeyIdentifier): + return NotImplemented + + return constant_time.bytes_eq(self.digest, other.digest) + + def __hash__(self) -> int: + return hash(self.digest) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class AuthorityInformationAccess(ExtensionType): + oid = ExtensionOID.AUTHORITY_INFORMATION_ACCESS + + def __init__( + self, descriptions: typing.Iterable[AccessDescription] + ) -> None: + descriptions = list(descriptions) + if not all(isinstance(x, AccessDescription) for x in descriptions): + raise TypeError( + "Every item in the descriptions list must be an " + "AccessDescription" + ) + + self._descriptions = descriptions + + __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions") + + def __repr__(self) -> str: + return f"" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, AuthorityInformationAccess): + return NotImplemented + + return self._descriptions == other._descriptions + + def __hash__(self) -> int: + return hash(tuple(self._descriptions)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class SubjectInformationAccess(ExtensionType): + oid = ExtensionOID.SUBJECT_INFORMATION_ACCESS + + def __init__( + self, descriptions: typing.Iterable[AccessDescription] + ) -> None: + descriptions = list(descriptions) + if not all(isinstance(x, AccessDescription) for x in 
descriptions): + raise TypeError( + "Every item in the descriptions list must be an " + "AccessDescription" + ) + + self._descriptions = descriptions + + __len__, __iter__, __getitem__ = _make_sequence_methods("_descriptions") + + def __repr__(self) -> str: + return f"" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, SubjectInformationAccess): + return NotImplemented + + return self._descriptions == other._descriptions + + def __hash__(self) -> int: + return hash(tuple(self._descriptions)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class AccessDescription: + def __init__( + self, access_method: ObjectIdentifier, access_location: GeneralName + ) -> None: + if not isinstance(access_method, ObjectIdentifier): + raise TypeError("access_method must be an ObjectIdentifier") + + if not isinstance(access_location, GeneralName): + raise TypeError("access_location must be a GeneralName") + + self._access_method = access_method + self._access_location = access_location + + def __repr__(self) -> str: + return ( + f"" + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, AccessDescription): + return NotImplemented + + return ( + self.access_method == other.access_method + and self.access_location == other.access_location + ) + + def __hash__(self) -> int: + return hash((self.access_method, self.access_location)) + + @property + def access_method(self) -> ObjectIdentifier: + return self._access_method + + @property + def access_location(self) -> GeneralName: + return self._access_location + + +class BasicConstraints(ExtensionType): + oid = ExtensionOID.BASIC_CONSTRAINTS + + def __init__(self, ca: bool, path_length: int | None) -> None: + if not isinstance(ca, bool): + raise TypeError("ca must be a boolean value") + + if path_length is not None and not ca: + raise ValueError("path_length must be None when ca is False") + + if path_length is not None and ( + not isinstance(path_length, int) or path_length < 0 + ): + raise TypeError( + "path_length must be a non-negative integer or None" + ) + + self._ca = ca + self._path_length = path_length + + @property + def ca(self) -> bool: + return self._ca + + @property + def path_length(self) -> int | None: + return self._path_length + + def __repr__(self) -> str: + return ( + f"" + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, BasicConstraints): + return NotImplemented + + return self.ca == other.ca and self.path_length == other.path_length + + def __hash__(self) -> int: + return hash((self.ca, self.path_length)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class DeltaCRLIndicator(ExtensionType): + oid = ExtensionOID.DELTA_CRL_INDICATOR + + def __init__(self, crl_number: int) -> None: + if not isinstance(crl_number, int): + raise TypeError("crl_number must be an integer") + + self._crl_number = crl_number + + @property + def crl_number(self) -> int: + return self._crl_number + + def __eq__(self, other: object) -> bool: + if not isinstance(other, DeltaCRLIndicator): + return NotImplemented + + return self.crl_number == other.crl_number + + def __hash__(self) -> int: + return hash(self.crl_number) + + def __repr__(self) -> str: + return f"" + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class CRLDistributionPoints(ExtensionType): + oid = ExtensionOID.CRL_DISTRIBUTION_POINTS + + def __init__( + self, distribution_points: typing.Iterable[DistributionPoint] + ) 
-> None: + distribution_points = list(distribution_points) + if not all( + isinstance(x, DistributionPoint) for x in distribution_points + ): + raise TypeError( + "distribution_points must be a list of DistributionPoint " + "objects" + ) + + self._distribution_points = distribution_points + + __len__, __iter__, __getitem__ = _make_sequence_methods( + "_distribution_points" + ) + + def __repr__(self) -> str: + return f"" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, CRLDistributionPoints): + return NotImplemented + + return self._distribution_points == other._distribution_points + + def __hash__(self) -> int: + return hash(tuple(self._distribution_points)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class FreshestCRL(ExtensionType): + oid = ExtensionOID.FRESHEST_CRL + + def __init__( + self, distribution_points: typing.Iterable[DistributionPoint] + ) -> None: + distribution_points = list(distribution_points) + if not all( + isinstance(x, DistributionPoint) for x in distribution_points + ): + raise TypeError( + "distribution_points must be a list of DistributionPoint " + "objects" + ) + + self._distribution_points = distribution_points + + __len__, __iter__, __getitem__ = _make_sequence_methods( + "_distribution_points" + ) + + def __repr__(self) -> str: + return f"" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, FreshestCRL): + return NotImplemented + + return self._distribution_points == other._distribution_points + + def __hash__(self) -> int: + return hash(tuple(self._distribution_points)) + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class DistributionPoint: + def __init__( + self, + full_name: typing.Iterable[GeneralName] | None, + relative_name: RelativeDistinguishedName | None, + reasons: frozenset[ReasonFlags] | None, + crl_issuer: typing.Iterable[GeneralName] | None, + ) -> None: + if full_name and relative_name: + raise ValueError( + "You cannot provide both full_name and relative_name, at " + "least one must be None." + ) + if not full_name and not relative_name and not crl_issuer: + raise ValueError( + "Either full_name, relative_name or crl_issuer must be " + "provided." 
+ ) + + if full_name is not None: + full_name = list(full_name) + if not all(isinstance(x, GeneralName) for x in full_name): + raise TypeError( + "full_name must be a list of GeneralName objects" + ) + + if relative_name: + if not isinstance(relative_name, RelativeDistinguishedName): + raise TypeError( + "relative_name must be a RelativeDistinguishedName" + ) + + if crl_issuer is not None: + crl_issuer = list(crl_issuer) + if not all(isinstance(x, GeneralName) for x in crl_issuer): + raise TypeError( + "crl_issuer must be None or a list of general names" + ) + + if reasons and ( + not isinstance(reasons, frozenset) + or not all(isinstance(x, ReasonFlags) for x in reasons) + ): + raise TypeError("reasons must be None or frozenset of ReasonFlags") + + if reasons and ( + ReasonFlags.unspecified in reasons + or ReasonFlags.remove_from_crl in reasons + ): + raise ValueError( + "unspecified and remove_from_crl are not valid reasons in a " + "DistributionPoint" + ) + + self._full_name = full_name + self._relative_name = relative_name + self._reasons = reasons + self._crl_issuer = crl_issuer + + def __repr__(self) -> str: + return ( + "".format(self) + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, DistributionPoint): + return NotImplemented + + return ( + self.full_name == other.full_name + and self.relative_name == other.relative_name + and self.reasons == other.reasons + and self.crl_issuer == other.crl_issuer + ) + + def __hash__(self) -> int: + if self.full_name is not None: + fn: tuple[GeneralName, ...] | None = tuple(self.full_name) + else: + fn = None + + if self.crl_issuer is not None: + crl_issuer: tuple[GeneralName, ...] | None = tuple(self.crl_issuer) + else: + crl_issuer = None + + return hash((fn, self.relative_name, self.reasons, crl_issuer)) + + @property + def full_name(self) -> list[GeneralName] | None: + return self._full_name + + @property + def relative_name(self) -> RelativeDistinguishedName | None: + return self._relative_name + + @property + def reasons(self) -> frozenset[ReasonFlags] | None: + return self._reasons + + @property + def crl_issuer(self) -> list[GeneralName] | None: + return self._crl_issuer + + +class ReasonFlags(utils.Enum): + unspecified = "unspecified" + key_compromise = "keyCompromise" + ca_compromise = "cACompromise" + affiliation_changed = "affiliationChanged" + superseded = "superseded" + cessation_of_operation = "cessationOfOperation" + certificate_hold = "certificateHold" + privilege_withdrawn = "privilegeWithdrawn" + aa_compromise = "aACompromise" + remove_from_crl = "removeFromCRL" + + +# These are distribution point bit string mappings. Not to be confused with +# CRLReason reason flags bit string mappings. 
+# ReasonFlags ::= BIT STRING { +# unused (0), +# keyCompromise (1), +# cACompromise (2), +# affiliationChanged (3), +# superseded (4), +# cessationOfOperation (5), +# certificateHold (6), +# privilegeWithdrawn (7), +# aACompromise (8) } +_REASON_BIT_MAPPING = { + 1: ReasonFlags.key_compromise, + 2: ReasonFlags.ca_compromise, + 3: ReasonFlags.affiliation_changed, + 4: ReasonFlags.superseded, + 5: ReasonFlags.cessation_of_operation, + 6: ReasonFlags.certificate_hold, + 7: ReasonFlags.privilege_withdrawn, + 8: ReasonFlags.aa_compromise, +} + +_CRLREASONFLAGS = { + ReasonFlags.key_compromise: 1, + ReasonFlags.ca_compromise: 2, + ReasonFlags.affiliation_changed: 3, + ReasonFlags.superseded: 4, + ReasonFlags.cessation_of_operation: 5, + ReasonFlags.certificate_hold: 6, + ReasonFlags.privilege_withdrawn: 7, + ReasonFlags.aa_compromise: 8, +} + +# CRLReason ::= ENUMERATED { +# unspecified (0), +# keyCompromise (1), +# cACompromise (2), +# affiliationChanged (3), +# superseded (4), +# cessationOfOperation (5), +# certificateHold (6), +# -- value 7 is not used +# removeFromCRL (8), +# privilegeWithdrawn (9), +# aACompromise (10) } +_CRL_ENTRY_REASON_ENUM_TO_CODE = { + ReasonFlags.unspecified: 0, + ReasonFlags.key_compromise: 1, + ReasonFlags.ca_compromise: 2, + ReasonFlags.affiliation_changed: 3, + ReasonFlags.superseded: 4, + ReasonFlags.cessation_of_operation: 5, + ReasonFlags.certificate_hold: 6, + ReasonFlags.remove_from_crl: 8, + ReasonFlags.privilege_withdrawn: 9, + ReasonFlags.aa_compromise: 10, +} + + +class PolicyConstraints(ExtensionType): + oid = ExtensionOID.POLICY_CONSTRAINTS + + def __init__( + self, + require_explicit_policy: int | None, + inhibit_policy_mapping: int | None, + ) -> None: + if require_explicit_policy is not None and not isinstance( + require_explicit_policy, int + ): + raise TypeError( + "require_explicit_policy must be a non-negative integer or " + "None" + ) + + if inhibit_policy_mapping is not None and not isinstance( + inhibit_policy_mapping, int + ): + raise TypeError( + "inhibit_policy_mapping must be a non-negative integer or None" + ) + + if inhibit_policy_mapping is None and require_explicit_policy is None: + raise ValueError( + "At least one of require_explicit_policy and " + "inhibit_policy_mapping must not be None" + ) + + self._require_explicit_policy = require_explicit_policy + self._inhibit_policy_mapping = inhibit_policy_mapping + + def __repr__(self) -> str: + return ( + "".format(self) + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, PolicyConstraints): + return NotImplemented + + return ( + self.require_explicit_policy == other.require_explicit_policy + and self.inhibit_policy_mapping == other.inhibit_policy_mapping + ) + + def __hash__(self) -> int: + return hash( + (self.require_explicit_policy, self.inhibit_policy_mapping) + ) + + @property + def require_explicit_policy(self) -> int | None: + return self._require_explicit_policy + + @property + def inhibit_policy_mapping(self) -> int | None: + return self._inhibit_policy_mapping + + def public_bytes(self) -> bytes: + return rust_x509.encode_extension_value(self) + + +class CertificatePolicies(ExtensionType): + oid = ExtensionOID.CERTIFICATE_POLICIES + + def __init__(self, policies: typing.Iterable[PolicyInformation]) -> None: + policies = list(policies) + if not all(isinstance(x, PolicyInformation) for x in policies): + raise TypeError( + "Every item in the policies list must be a " + "PolicyInformation" + ) + + self._policies = policies + + __len__, __iter__, 
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_policies")
+
+    def __repr__(self) -> str:
+        return f"<CertificatePolicies({self._policies})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CertificatePolicies):
+            return NotImplemented
+
+        return self._policies == other._policies
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._policies))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class PolicyInformation:
+    def __init__(
+        self,
+        policy_identifier: ObjectIdentifier,
+        policy_qualifiers: typing.Iterable[str | UserNotice] | None,
+    ) -> None:
+        if not isinstance(policy_identifier, ObjectIdentifier):
+            raise TypeError("policy_identifier must be an ObjectIdentifier")
+
+        self._policy_identifier = policy_identifier
+
+        if policy_qualifiers is not None:
+            policy_qualifiers = list(policy_qualifiers)
+            if not all(
+                isinstance(x, (str, UserNotice)) for x in policy_qualifiers
+            ):
+                raise TypeError(
+                    "policy_qualifiers must be a list of strings and/or "
+                    "UserNotice objects or None"
+                )
+
+        self._policy_qualifiers = policy_qualifiers
+
+    def __repr__(self) -> str:
+        return (
+            f"<PolicyInformation(policy_identifier={self.policy_identifier}, "
+            f"policy_qualifiers={self.policy_qualifiers})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PolicyInformation):
+            return NotImplemented
+
+        return (
+            self.policy_identifier == other.policy_identifier
+            and self.policy_qualifiers == other.policy_qualifiers
+        )
+
+    def __hash__(self) -> int:
+        if self.policy_qualifiers is not None:
+            pq: tuple[str | UserNotice, ...] | None = tuple(
+                self.policy_qualifiers
+            )
+        else:
+            pq = None
+
+        return hash((self.policy_identifier, pq))
+
+    @property
+    def policy_identifier(self) -> ObjectIdentifier:
+        return self._policy_identifier
+
+    @property
+    def policy_qualifiers(
+        self,
+    ) -> list[str | UserNotice] | None:
+        return self._policy_qualifiers
+
+
+class UserNotice:
+    def __init__(
+        self,
+        notice_reference: NoticeReference | None,
+        explicit_text: str | None,
+    ) -> None:
+        if notice_reference and not isinstance(
+            notice_reference, NoticeReference
+        ):
+            raise TypeError(
+                "notice_reference must be None or a NoticeReference"
+            )
+
+        self._notice_reference = notice_reference
+        self._explicit_text = explicit_text
+
+    def __repr__(self) -> str:
+        return (
+            f"<UserNotice(notice_reference={self.notice_reference}, "
+            f"explicit_text={self.explicit_text!r})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UserNotice):
+            return NotImplemented
+
+        return (
+            self.notice_reference == other.notice_reference
+            and self.explicit_text == other.explicit_text
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.notice_reference, self.explicit_text))
+
+    @property
+    def notice_reference(self) -> NoticeReference | None:
+        return self._notice_reference
+
+    @property
+    def explicit_text(self) -> str | None:
+        return self._explicit_text
+
+
+class NoticeReference:
+    def __init__(
+        self,
+        organization: str | None,
+        notice_numbers: typing.Iterable[int],
+    ) -> None:
+        self._organization = organization
+        notice_numbers = list(notice_numbers)
+        if not all(isinstance(x, int) for x in notice_numbers):
+            raise TypeError("notice_numbers must be a list of integers")
+
+        self._notice_numbers = notice_numbers
+
+    def __repr__(self) -> str:
+        return (
+            f"<NoticeReference(organization={self.organization!r}, "
+            f"notice_numbers={self.notice_numbers})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, NoticeReference):
+            return NotImplemented
+
+        return (
+            self.organization == other.organization
+            and self.notice_numbers == other.notice_numbers
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.organization, tuple(self.notice_numbers)))
+
+    @property
+    def organization(self) -> str | None:
+        return self._organization
+
+    @property
+    def notice_numbers(self) -> list[int]:
+        return self._notice_numbers
+
+
+class ExtendedKeyUsage(ExtensionType):
+    oid = ExtensionOID.EXTENDED_KEY_USAGE
+
+    def __init__(self, usages: typing.Iterable[ObjectIdentifier]) -> None:
+        usages = list(usages)
+        if not all(isinstance(x, ObjectIdentifier) for x in usages):
+            raise TypeError(
+                "Every item in the usages list must be an ObjectIdentifier"
+            )
+
+        self._usages = usages
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_usages")
+
+    def __repr__(self) -> str:
+        return f"<ExtendedKeyUsage({self._usages})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, ExtendedKeyUsage):
+            return NotImplemented
+
+        return self._usages == other._usages
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._usages))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class OCSPNoCheck(ExtensionType):
+    oid = ExtensionOID.OCSP_NO_CHECK
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OCSPNoCheck):
+            return NotImplemented
+
+        return True
+
+    def __hash__(self) -> int:
+        return hash(OCSPNoCheck)
+
+    def __repr__(self) -> str:
+        return "<OCSPNoCheck()>"
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class PrecertPoison(ExtensionType):
+    oid = ExtensionOID.PRECERT_POISON
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PrecertPoison):
+            return NotImplemented
+
+        return True
+
+    def __hash__(self) -> int:
+        return hash(PrecertPoison)
+
+    def __repr__(self) -> str:
+        return "<PrecertPoison()>"
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class TLSFeature(ExtensionType):
+    oid = ExtensionOID.TLS_FEATURE
+
+    def __init__(self, features: typing.Iterable[TLSFeatureType]) -> None:
+        features = list(features)
+        if (
+            not all(isinstance(x, TLSFeatureType) for x in features)
+            or len(features) == 0
+        ):
+            raise TypeError(
+                "features must be a list of elements from the TLSFeatureType "
+                "enum"
+            )
+
+        self._features = features
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_features")
+
+    def __repr__(self) -> str:
+        return f"<TLSFeature(features={self._features})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, TLSFeature):
+            return NotImplemented
+
+        return self._features == other._features
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._features))
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
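+
+
+# TLSFeature with status_request is what is commonly deployed as OCSP
+# Must-Staple. Assuming `builder` is an x509.CertificateBuilder, the
+# extension would typically be attached as (illustrative sketch):
+#
+#     builder.add_extension(
+#         TLSFeature([TLSFeatureType.status_request]), critical=False
+#     )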
+
+
+class TLSFeatureType(utils.Enum):
+    # status_request is defined in RFC 6066 and is used for what is commonly
+    # called OCSP Must-Staple when present in the TLS Feature extension in an
+    # X.509 certificate.
+    status_request = 5
+    # status_request_v2 is defined in RFC 6961 and allows multiple OCSP
+    # responses to be provided. It is not currently in use by clients or
+    # servers.
+    status_request_v2 = 17
+
+
+_TLS_FEATURE_TYPE_TO_ENUM = {x.value: x for x in TLSFeatureType}
+
+
+class InhibitAnyPolicy(ExtensionType):
+    oid = ExtensionOID.INHIBIT_ANY_POLICY
+
+    def __init__(self, skip_certs: int) -> None:
+        if not isinstance(skip_certs, int):
+            raise TypeError("skip_certs must be an integer")
+
+        if skip_certs < 0:
+            raise ValueError("skip_certs must be a non-negative integer")
+
+        self._skip_certs = skip_certs
+
+    def __repr__(self) -> str:
+        return f"<InhibitAnyPolicy(skip_certs={self.skip_certs})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, InhibitAnyPolicy):
+            return NotImplemented
+
+        return self.skip_certs == other.skip_certs
+
+    def __hash__(self) -> int:
+        return hash(self.skip_certs)
+
+    @property
+    def skip_certs(self) -> int:
+        return self._skip_certs
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class KeyUsage(ExtensionType):
+    oid = ExtensionOID.KEY_USAGE
+
+    def __init__(
+        self,
+        digital_signature: bool,
+        content_commitment: bool,
+        key_encipherment: bool,
+        data_encipherment: bool,
+        key_agreement: bool,
+        key_cert_sign: bool,
+        crl_sign: bool,
+        encipher_only: bool,
+        decipher_only: bool,
+    ) -> None:
+        if not key_agreement and (encipher_only or decipher_only):
+            raise ValueError(
+                "encipher_only and decipher_only can only be true when "
+                "key_agreement is true"
+            )
+
+        self._digital_signature = digital_signature
+        self._content_commitment = content_commitment
+        self._key_encipherment = key_encipherment
+        self._data_encipherment = data_encipherment
+        self._key_agreement = key_agreement
+        self._key_cert_sign = key_cert_sign
+        self._crl_sign = crl_sign
+        self._encipher_only = encipher_only
+        self._decipher_only = decipher_only
+
+    @property
+    def digital_signature(self) -> bool:
+        return self._digital_signature
+
+    @property
+    def content_commitment(self) -> bool:
+        return self._content_commitment
+
+    @property
+    def key_encipherment(self) -> bool:
+        return self._key_encipherment
+
+    @property
+    def data_encipherment(self) -> bool:
+        return self._data_encipherment
+
+    @property
+    def key_agreement(self) -> bool:
+        return self._key_agreement
+
+    @property
+    def key_cert_sign(self) -> bool:
+        return self._key_cert_sign
+
+    @property
+    def crl_sign(self) -> bool:
+        return self._crl_sign
+
+    @property
+    def encipher_only(self) -> bool:
+        if not self.key_agreement:
+            raise ValueError(
+                "encipher_only is undefined unless key_agreement is true"
+            )
+        else:
+            return self._encipher_only
+
+    @property
+    def decipher_only(self) -> bool:
+        if not self.key_agreement:
+            raise ValueError(
+                "decipher_only is undefined unless key_agreement is true"
+            )
+        else:
+            return self._decipher_only
+
+    def __repr__(self) -> str:
+        try:
+            encipher_only = self.encipher_only
+            decipher_only = self.decipher_only
+        except ValueError:
+            # Users found None confusing because even though encipher/decipher
+            # have no meaning unless key_agreement is true, to construct an
+            # instance of the class you still need to pass False.
+            encipher_only = False
+            decipher_only = False
+
+        return (
+            f"<KeyUsage(digital_signature={self.digital_signature}, "
+            f"content_commitment={self.content_commitment}, "
+            f"key_encipherment={self.key_encipherment}, "
+            f"data_encipherment={self.data_encipherment}, "
+            f"key_agreement={self.key_agreement}, "
+            f"key_cert_sign={self.key_cert_sign}, crl_sign={self.crl_sign}, "
+            f"encipher_only={encipher_only}, decipher_only="
+            f"{decipher_only})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, KeyUsage):
+            return NotImplemented
+
+        return (
+            self.digital_signature == other.digital_signature
+            and self.content_commitment == other.content_commitment
+            and self.key_encipherment == other.key_encipherment
+            and self.data_encipherment == other.data_encipherment
+            and self.key_agreement == other.key_agreement
+            and self.key_cert_sign == other.key_cert_sign
+            and self.crl_sign == other.crl_sign
+            and self._encipher_only == other._encipher_only
+            and self._decipher_only == other._decipher_only
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (
+                self.digital_signature,
+                self.content_commitment,
+                self.key_encipherment,
+                self.data_encipherment,
+                self.key_agreement,
+                self.key_cert_sign,
+                self.crl_sign,
+                self._encipher_only,
+                self._decipher_only,
+            )
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
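+
+
+# As a sketch, a typical TLS server certificate asserts digital_signature
+# and key_encipherment and leaves every other flag False:
+#
+#     KeyUsage(
+#         digital_signature=True, content_commitment=False,
+#         key_encipherment=True, data_encipherment=False,
+#         key_agreement=False, key_cert_sign=False, crl_sign=False,
+#         encipher_only=False, decipher_only=False,
+#     )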
+
+
+class NameConstraints(ExtensionType):
+    oid = ExtensionOID.NAME_CONSTRAINTS
+
+    def __init__(
+        self,
+        permitted_subtrees: typing.Iterable[GeneralName] | None,
+        excluded_subtrees: typing.Iterable[GeneralName] | None,
+    ) -> None:
+        if permitted_subtrees is not None:
+            permitted_subtrees = list(permitted_subtrees)
+            if not permitted_subtrees:
+                raise ValueError(
+                    "permitted_subtrees must be a non-empty list or None"
+                )
+            if not all(isinstance(x, GeneralName) for x in permitted_subtrees):
+                raise TypeError(
+                    "permitted_subtrees must be a list of GeneralName objects "
+                    "or None"
+                )
+
+            self._validate_tree(permitted_subtrees)
+
+        if excluded_subtrees is not None:
+            excluded_subtrees = list(excluded_subtrees)
+            if not excluded_subtrees:
+                raise ValueError(
+                    "excluded_subtrees must be a non-empty list or None"
+                )
+            if not all(isinstance(x, GeneralName) for x in excluded_subtrees):
+                raise TypeError(
+                    "excluded_subtrees must be a list of GeneralName objects "
+                    "or None"
+                )
+
+            self._validate_tree(excluded_subtrees)
+
+        if permitted_subtrees is None and excluded_subtrees is None:
+            raise ValueError(
+                "At least one of permitted_subtrees and excluded_subtrees "
+                "must not be None"
+            )
+
+        self._permitted_subtrees = permitted_subtrees
+        self._excluded_subtrees = excluded_subtrees
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, NameConstraints):
+            return NotImplemented
+
+        return (
+            self.excluded_subtrees == other.excluded_subtrees
+            and self.permitted_subtrees == other.permitted_subtrees
+        )
+
+    def _validate_tree(self, tree: typing.Iterable[GeneralName]) -> None:
+        self._validate_ip_name(tree)
+        self._validate_dns_name(tree)
+
+    def _validate_ip_name(self, tree: typing.Iterable[GeneralName]) -> None:
+        if any(
+            isinstance(name, IPAddress)
+            and not isinstance(
+                name.value, (ipaddress.IPv4Network, ipaddress.IPv6Network)
+            )
+            for name in tree
+        ):
+            raise TypeError(
+                "IPAddress name constraints must be an IPv4Network or"
+                " IPv6Network object"
+            )
+
+    def _validate_dns_name(self, tree: typing.Iterable[GeneralName]) -> None:
+        if any(
+            isinstance(name, DNSName) and "*" in name.value for name in tree
+        ):
+            raise ValueError(
+                "DNSName name constraints must not contain the '*' wildcard"
+                " character"
+            )
+
+    def __repr__(self) -> str:
+        return (
+            f"<NameConstraints(permitted_subtrees={self.permitted_subtrees}, "
+            f"excluded_subtrees={self.excluded_subtrees})>"
+        )
+
+    def __hash__(self) -> int:
+        if self.permitted_subtrees is not None:
+            ps: tuple[GeneralName, ...] | None = tuple(
+                self.permitted_subtrees
+            )
+        else:
+            ps = None
+
+        if self.excluded_subtrees is not None:
+            es: tuple[GeneralName, ...] | None = tuple(self.excluded_subtrees)
+        else:
+            es = None
+
+        return hash((ps, es))
+
+    @property
+    def permitted_subtrees(
+        self,
+    ) -> list[GeneralName] | None:
+        return self._permitted_subtrees
+
+    @property
+    def excluded_subtrees(
+        self,
+    ) -> list[GeneralName] | None:
+        return self._excluded_subtrees
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class Extension(typing.Generic[ExtensionTypeVar]):
+    def __init__(
+        self, oid: ObjectIdentifier, critical: bool, value: ExtensionTypeVar
+    ) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError(
+                "oid argument must be an ObjectIdentifier instance."
+            )
+
+        if not isinstance(critical, bool):
+            raise TypeError("critical must be a boolean value")
+
+        self._oid = oid
+        self._critical = critical
+        self._value = value
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def critical(self) -> bool:
+        return self._critical
+
+    @property
+    def value(self) -> ExtensionTypeVar:
+        return self._value
+
+    def __repr__(self) -> str:
+        return (
+            f"<Extension(oid={self.oid}, critical={self.critical}, "
+            f"value={self.value})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Extension):
+            return NotImplemented
+
+        return (
+            self.oid == other.oid
+            and self.critical == other.critical
+            and self.value == other.value
+        )
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.critical, self.value))
+
+
+class GeneralNames:
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        general_names = list(general_names)
+        if not all(isinstance(x, GeneralName) for x in general_names):
+            raise TypeError(
+                "Every item in the general_names list must be an "
+                "object conforming to the GeneralName interface"
+            )
+
+        self._general_names = general_names
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[UniformResourceIdentifier]
+        | type[RFC822Name],
+    ) -> list[str]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DirectoryName],
+    ) -> list[Name]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[RegisteredID],
+    ) -> list[ObjectIdentifier]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[IPAddress]
+    ) -> list[_IPAddressTypes]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[OtherName]
+    ) -> list[OtherName]: ...
+
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[DirectoryName]
+        | type[IPAddress]
+        | type[OtherName]
+        | type[RFC822Name]
+        | type[RegisteredID]
+        | type[UniformResourceIdentifier],
+    ) -> (
+        list[_IPAddressTypes]
+        | list[str]
+        | list[OtherName]
+        | list[Name]
+        | list[ObjectIdentifier]
+    ):
+        # Return the value of each GeneralName, except for OtherName instances
+        # which we return directly because it has two important properties not
+        # just one value.
+        objs = (i for i in self if isinstance(i, type))
+        if type != OtherName:
+            return [i.value for i in objs]
+        return list(objs)
+
+    def __repr__(self) -> str:
+        return f"<GeneralNames({self._general_names})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, GeneralNames):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._general_names))
+
+
+class SubjectAlternativeName(ExtensionType):
+    oid = ExtensionOID.SUBJECT_ALTERNATIVE_NAME
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[UniformResourceIdentifier]
+        | type[RFC822Name],
+    ) -> list[str]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DirectoryName],
+    ) -> list[Name]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[RegisteredID],
+    ) -> list[ObjectIdentifier]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[IPAddress]
+    ) -> list[_IPAddressTypes]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[OtherName]
+    ) -> list[OtherName]: ...
+
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[DirectoryName]
+        | type[IPAddress]
+        | type[OtherName]
+        | type[RFC822Name]
+        | type[RegisteredID]
+        | type[UniformResourceIdentifier],
+    ) -> (
+        list[_IPAddressTypes]
+        | list[str]
+        | list[OtherName]
+        | list[Name]
+        | list[ObjectIdentifier]
+    ):
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return f"<SubjectAlternativeName({self._general_names})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SubjectAlternativeName):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
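+
+
+# The most common read path, assuming `cert` is a parsed x509.Certificate
+# (illustrative):
+#
+#     ext = cert.extensions.get_extension_for_class(SubjectAlternativeName)
+#     dns_names = ext.value.get_values_for_type(DNSName)  # -> list[str]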
+
+
+class IssuerAlternativeName(ExtensionType):
+    oid = ExtensionOID.ISSUER_ALTERNATIVE_NAME
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[UniformResourceIdentifier]
+        | type[RFC822Name],
+    ) -> list[str]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DirectoryName],
+    ) -> list[Name]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[RegisteredID],
+    ) -> list[ObjectIdentifier]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[IPAddress]
+    ) -> list[_IPAddressTypes]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[OtherName]
+    ) -> list[OtherName]: ...
+
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[DirectoryName]
+        | type[IPAddress]
+        | type[OtherName]
+        | type[RFC822Name]
+        | type[RegisteredID]
+        | type[UniformResourceIdentifier],
+    ) -> (
+        list[_IPAddressTypes]
+        | list[str]
+        | list[OtherName]
+        | list[Name]
+        | list[ObjectIdentifier]
+    ):
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return f"<IssuerAlternativeName({self._general_names})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IssuerAlternativeName):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class CertificateIssuer(ExtensionType):
+    oid = CRLEntryExtensionOID.CERTIFICATE_ISSUER
+
+    def __init__(self, general_names: typing.Iterable[GeneralName]) -> None:
+        self._general_names = GeneralNames(general_names)
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods("_general_names")
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[UniformResourceIdentifier]
+        | type[RFC822Name],
+    ) -> list[str]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[DirectoryName],
+    ) -> list[Name]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self,
+        type: type[RegisteredID],
+    ) -> list[ObjectIdentifier]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[IPAddress]
+    ) -> list[_IPAddressTypes]: ...
+
+    @typing.overload
+    def get_values_for_type(
+        self, type: type[OtherName]
+    ) -> list[OtherName]: ...
+
+    def get_values_for_type(
+        self,
+        type: type[DNSName]
+        | type[DirectoryName]
+        | type[IPAddress]
+        | type[OtherName]
+        | type[RFC822Name]
+        | type[RegisteredID]
+        | type[UniformResourceIdentifier],
+    ) -> (
+        list[_IPAddressTypes]
+        | list[str]
+        | list[OtherName]
+        | list[Name]
+        | list[ObjectIdentifier]
+    ):
+        return self._general_names.get_values_for_type(type)
+
+    def __repr__(self) -> str:
+        return f"<CertificateIssuer({self._general_names})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CertificateIssuer):
+            return NotImplemented
+
+        return self._general_names == other._general_names
+
+    def __hash__(self) -> int:
+        return hash(self._general_names)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class CRLReason(ExtensionType):
+    oid = CRLEntryExtensionOID.CRL_REASON
+
+    def __init__(self, reason: ReasonFlags) -> None:
+        if not isinstance(reason, ReasonFlags):
+            raise TypeError("reason must be an element from ReasonFlags")
+
+        self._reason = reason
+
+    def __repr__(self) -> str:
+        return f"<CRLReason(reason={self.reason})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, CRLReason):
+            return NotImplemented
+
+        return self.reason == other.reason
+
+    def __hash__(self) -> int:
+        return hash(self.reason)
+
+    @property
+    def reason(self) -> ReasonFlags:
+        return self._reason
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class InvalidityDate(ExtensionType):
+    oid = CRLEntryExtensionOID.INVALIDITY_DATE
+
+    def __init__(self, invalidity_date: datetime.datetime) -> None:
+        if not isinstance(invalidity_date, datetime.datetime):
+            raise TypeError("invalidity_date must be a datetime.datetime")
+
+        self._invalidity_date = invalidity_date
+
+    def __repr__(self) -> str:
+        return f"<InvalidityDate(invalidity_date={self.invalidity_date})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, InvalidityDate):
+            return NotImplemented
+
+        return self.invalidity_date == other.invalidity_date
+
+    def __hash__(self) -> int:
+        return hash(self.invalidity_date)
+
+    @property
+    def invalidity_date(self) -> datetime.datetime:
+        return self._invalidity_date
+
+    @property
+    def invalidity_date_utc(self) -> datetime.datetime:
+        if self._invalidity_date.tzinfo is None:
+            return self._invalidity_date.replace(tzinfo=datetime.timezone.utc)
+        else:
+            return self._invalidity_date.astimezone(tz=datetime.timezone.utc)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class PrecertificateSignedCertificateTimestamps(ExtensionType):
+    oid = ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
+
+    def __init__(
+        self,
+        signed_certificate_timestamps: typing.Iterable[
+            SignedCertificateTimestamp
+        ],
+    ) -> None:
+        signed_certificate_timestamps = list(signed_certificate_timestamps)
+        if not all(
+            isinstance(sct, SignedCertificateTimestamp)
+            for sct in signed_certificate_timestamps
+        ):
+            raise TypeError(
+                "Every item in the signed_certificate_timestamps list must be "
+                "a SignedCertificateTimestamp"
+            )
+        self._signed_certificate_timestamps = signed_certificate_timestamps
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_signed_certificate_timestamps"
+    )
+
+    def __repr__(self) -> str:
+        return f"<PrecertificateSignedCertificateTimestamps({list(self)})>"
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._signed_certificate_timestamps))
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, PrecertificateSignedCertificateTimestamps):
+            return NotImplemented
+
+        return (
+            self._signed_certificate_timestamps
+            == other._signed_certificate_timestamps
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class SignedCertificateTimestamps(ExtensionType):
+    oid = ExtensionOID.SIGNED_CERTIFICATE_TIMESTAMPS
+
+    def __init__(
+        self,
+        signed_certificate_timestamps: typing.Iterable[
+            SignedCertificateTimestamp
+        ],
+    ) -> None:
+        signed_certificate_timestamps = list(signed_certificate_timestamps)
+        if not all(
+            isinstance(sct, SignedCertificateTimestamp)
+            for sct in signed_certificate_timestamps
+        ):
+            raise TypeError(
+                "Every item in the signed_certificate_timestamps list must be "
+                "a SignedCertificateTimestamp"
+            )
+        self._signed_certificate_timestamps = signed_certificate_timestamps
+
+    __len__, __iter__, __getitem__ = _make_sequence_methods(
+        "_signed_certificate_timestamps"
+    )
+
+    def __repr__(self) -> str:
+        return f"<SignedCertificateTimestamps({list(self)})>"
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._signed_certificate_timestamps))
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, SignedCertificateTimestamps):
+            return NotImplemented
+
+        return (
+            self._signed_certificate_timestamps
+            == other._signed_certificate_timestamps
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class OCSPNonce(ExtensionType):
+    oid = OCSPExtensionOID.NONCE
+
+    def __init__(self, nonce: bytes) -> None:
+        if not isinstance(nonce, bytes):
+            raise TypeError("nonce must be bytes")
+
+        self._nonce = nonce
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OCSPNonce):
+            return NotImplemented
+
+        return self.nonce == other.nonce
+
+    def __hash__(self) -> int:
+        return hash(self.nonce)
+
+    def __repr__(self) -> str:
+        return f"<OCSPNonce(nonce={self.nonce!r})>"
+
+    @property
+    def nonce(self) -> bytes:
+        return self._nonce
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
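+
+
+# A request nonce is usually just random bytes; RFC 8954 caps the nonce
+# length at 32 octets. An illustrative sketch:
+#
+#     OCSPNonce(os.urandom(16))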
+
+
+class OCSPAcceptableResponses(ExtensionType):
+    oid = OCSPExtensionOID.ACCEPTABLE_RESPONSES
+
+    def __init__(self, responses: typing.Iterable[ObjectIdentifier]) -> None:
+        responses = list(responses)
+        if any(not isinstance(r, ObjectIdentifier) for r in responses):
+            raise TypeError("All responses must be ObjectIdentifiers")
+
+        self._responses = responses
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OCSPAcceptableResponses):
+            return NotImplemented
+
+        return self._responses == other._responses
+
+    def __hash__(self) -> int:
+        return hash(tuple(self._responses))
+
+    def __repr__(self) -> str:
+        return f"<OCSPAcceptableResponses(responses={self._responses})>"
+
+    def __iter__(self) -> typing.Iterator[ObjectIdentifier]:
+        return iter(self._responses)
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class IssuingDistributionPoint(ExtensionType):
+    oid = ExtensionOID.ISSUING_DISTRIBUTION_POINT
+
+    def __init__(
+        self,
+        full_name: typing.Iterable[GeneralName] | None,
+        relative_name: RelativeDistinguishedName | None,
+        only_contains_user_certs: bool,
+        only_contains_ca_certs: bool,
+        only_some_reasons: frozenset[ReasonFlags] | None,
+        indirect_crl: bool,
+        only_contains_attribute_certs: bool,
+    ) -> None:
+        if full_name is not None:
+            full_name = list(full_name)
+
+        if only_some_reasons and (
+            not isinstance(only_some_reasons, frozenset)
+            or not all(isinstance(x, ReasonFlags) for x in only_some_reasons)
+        ):
+            raise TypeError(
+                "only_some_reasons must be None or frozenset of ReasonFlags"
+            )
+
+        if only_some_reasons and (
+            ReasonFlags.unspecified in only_some_reasons
+            or ReasonFlags.remove_from_crl in only_some_reasons
+        ):
+            raise ValueError(
+                "unspecified and remove_from_crl are not valid reasons in an "
+                "IssuingDistributionPoint"
+            )
+
+        if not (
+            isinstance(only_contains_user_certs, bool)
+            and isinstance(only_contains_ca_certs, bool)
+            and isinstance(indirect_crl, bool)
+            and isinstance(only_contains_attribute_certs, bool)
+        ):
+            raise TypeError(
+                "only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl and only_contains_attribute_certs "
+                "must all be boolean."
+            )
+
+        crl_constraints = [
+            only_contains_user_certs,
+            only_contains_ca_certs,
+            indirect_crl,
+            only_contains_attribute_certs,
+        ]
+
+        if len([x for x in crl_constraints if x]) > 1:
+            raise ValueError(
+                "Only one of the following can be set to True: "
+                "only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl, only_contains_attribute_certs"
+            )
+
+        if not any(
+            [
+                only_contains_user_certs,
+                only_contains_ca_certs,
+                indirect_crl,
+                only_contains_attribute_certs,
+                full_name,
+                relative_name,
+                only_some_reasons,
+            ]
+        ):
+            raise ValueError(
+                "Cannot create empty extension: "
+                "if only_contains_user_certs, only_contains_ca_certs, "
+                "indirect_crl, and only_contains_attribute_certs are all False"
+                ", then either full_name, relative_name, or only_some_reasons "
+                "must have a value."
+            )
+
+        self._only_contains_user_certs = only_contains_user_certs
+        self._only_contains_ca_certs = only_contains_ca_certs
+        self._indirect_crl = indirect_crl
+        self._only_contains_attribute_certs = only_contains_attribute_certs
+        self._only_some_reasons = only_some_reasons
+        self._full_name = full_name
+        self._relative_name = relative_name
+
+    def __repr__(self) -> str:
+        return (
+            f"<IssuingDistributionPoint(full_name={self.full_name}, "
+            f"relative_name={self.relative_name}, "
+            f"only_contains_user_certs={self.only_contains_user_certs}, "
+            f"only_contains_ca_certs={self.only_contains_ca_certs}, "
+            f"only_some_reasons={self.only_some_reasons}, "
+            f"indirect_crl={self.indirect_crl}, "
+            f"only_contains_attribute_certs="
+            f"{self.only_contains_attribute_certs})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IssuingDistributionPoint):
+            return NotImplemented
+
+        return (
+            self.full_name == other.full_name
+            and self.relative_name == other.relative_name
+            and self.only_contains_user_certs
+            == other.only_contains_user_certs
+            and self.only_contains_ca_certs == other.only_contains_ca_certs
+            and self.only_some_reasons == other.only_some_reasons
+            and self.indirect_crl == other.indirect_crl
+            and self.only_contains_attribute_certs
+            == other.only_contains_attribute_certs
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (
+                self.full_name,
+                self.relative_name,
+                self.only_contains_user_certs,
+                self.only_contains_ca_certs,
+                self.only_some_reasons,
+                self.indirect_crl,
+                self.only_contains_attribute_certs,
+            )
+        )
+
+    @property
+    def full_name(self) -> list[GeneralName] | None:
+        return self._full_name
+
+    @property
+    def relative_name(self) -> RelativeDistinguishedName | None:
+        return self._relative_name
+
+    @property
+    def only_contains_user_certs(self) -> bool:
+        return self._only_contains_user_certs
+
+    @property
+    def only_contains_ca_certs(self) -> bool:
+        return self._only_contains_ca_certs
+
+    @property
+    def only_some_reasons(
+        self,
+    ) -> frozenset[ReasonFlags] | None:
+        return self._only_some_reasons
+
+    @property
+    def indirect_crl(self) -> bool:
+        return self._indirect_crl
+
+    @property
+    def only_contains_attribute_certs(self) -> bool:
+        return self._only_contains_attribute_certs
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class MSCertificateTemplate(ExtensionType):
+    oid = ExtensionOID.MS_CERTIFICATE_TEMPLATE
+
+    def __init__(
+        self,
+        template_id: ObjectIdentifier,
+        major_version: int | None,
+        minor_version: int | None,
+    ) -> None:
+        if not isinstance(template_id, ObjectIdentifier):
+            raise TypeError("oid must be an ObjectIdentifier")
+        self._template_id = template_id
+        if (
+            major_version is not None and not isinstance(major_version, int)
+        ) or (
+            minor_version is not None and not isinstance(minor_version, int)
+        ):
+            raise TypeError(
+                "major_version and minor_version must be integers or None"
+            )
+        self._major_version = major_version
+        self._minor_version = minor_version
+
+    @property
+    def template_id(self) -> ObjectIdentifier:
+        return self._template_id
+
+    @property
+    def major_version(self) -> int | None:
+        return self._major_version
+
+    @property
+    def minor_version(self) -> int | None:
+        return self._minor_version
+
+    def __repr__(self) -> str:
+        return (
+            f"<MSCertificateTemplate(template_id={self.template_id}, "
+            f"major_version={self.major_version}, "
+            f"minor_version={self.minor_version})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, MSCertificateTemplate):
+            return NotImplemented
+
+        return (
+            self.template_id == other.template_id
+            and self.major_version == other.major_version
+            and self.minor_version == other.minor_version
+        )
+
+    def __hash__(self) -> int:
+        return hash(
+            (self.template_id, self.major_version, self.minor_version)
+        )
+
+    def public_bytes(self) -> bytes:
+        return rust_x509.encode_extension_value(self)
+
+
+class UnrecognizedExtension(ExtensionType):
+    def __init__(self, oid: ObjectIdentifier, value: bytes) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError("oid must be an ObjectIdentifier")
+        self._oid = oid
+        self._value = value
+
+    @property
+    def oid(self) -> ObjectIdentifier:  # type: ignore[override]
+        return self._oid
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return (
+            f"<UnrecognizedExtension(oid={self.oid}, "
+            f"value={self.value!r})>"
+        )
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UnrecognizedExtension):
+            return NotImplemented
+
+        return self.oid == other.oid and self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash((self.oid, self.value))
+
+    def public_bytes(self) -> bytes:
+        return self.value
diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/general_name.py b/templates/skills/file_manager/dependencies/cryptography/x509/general_name.py
new file mode 100644
index 00000000..672f2875
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cryptography/x509/general_name.py
@@ -0,0 +1,281 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+from __future__ import annotations
+
+import abc
+import ipaddress
+import typing
+from email.utils import parseaddr
+
+from cryptography.x509.name import Name
+from cryptography.x509.oid import ObjectIdentifier
+
+_IPAddressTypes = typing.Union[
+    ipaddress.IPv4Address,
+    ipaddress.IPv6Address,
+    ipaddress.IPv4Network,
+    ipaddress.IPv6Network,
+]
+
+
+class UnsupportedGeneralNameType(Exception):
+    pass
+
+
+class GeneralName(metaclass=abc.ABCMeta):
+    @property
+    @abc.abstractmethod
+    def value(self) -> typing.Any:
+        """
+        Return the value of the object
+        """
+
+
+class RFC822Name(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "RFC822Name values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        name, address = parseaddr(value)
+        if name or not address:
+            # parseaddr has found a name (e.g. Name <email>) or the entire
+            # value is an empty string.
+            raise ValueError("Invalid rfc822name value")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(cls, value: str) -> RFC822Name:
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return f"<RFC822Name(value={self.value!r})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RFC822Name):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class DNSName(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "DNSName values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(cls, value: str) -> DNSName:
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return f"<DNSName(value={self.value!r})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DNSName):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class UniformResourceIdentifier(GeneralName):
+    def __init__(self, value: str) -> None:
+        if isinstance(value, str):
+            try:
+                value.encode("ascii")
+            except UnicodeEncodeError:
+                raise ValueError(
+                    "URI values should be passed as an A-label string. "
+                    "This means unicode characters should be encoded via "
+                    "a library like idna."
+                )
+        else:
+            raise TypeError("value must be string")
+
+        self._value = value
+
+    @property
+    def value(self) -> str:
+        return self._value
+
+    @classmethod
+    def _init_without_validation(
+        cls, value: str
+    ) -> UniformResourceIdentifier:
+        instance = cls.__new__(cls)
+        instance._value = value
+        return instance
+
+    def __repr__(self) -> str:
+        return f"<UniformResourceIdentifier(value={self.value!r})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, UniformResourceIdentifier):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class DirectoryName(GeneralName):
+    def __init__(self, value: Name) -> None:
+        if not isinstance(value, Name):
+            raise TypeError("value must be a Name")
+
+        self._value = value
+
+    @property
+    def value(self) -> Name:
+        return self._value
+
+    def __repr__(self) -> str:
+        return f"<DirectoryName(value={self.value})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, DirectoryName):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class RegisteredID(GeneralName):
+    def __init__(self, value: ObjectIdentifier) -> None:
+        if not isinstance(value, ObjectIdentifier):
+            raise TypeError("value must be an ObjectIdentifier")
+
+        self._value = value
+
+    @property
+    def value(self) -> ObjectIdentifier:
+        return self._value
+
+    def __repr__(self) -> str:
+        return f"<RegisteredID(value={self.value})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, RegisteredID):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class IPAddress(GeneralName):
+    def __init__(self, value: _IPAddressTypes) -> None:
+        if not isinstance(
+            value,
+            (
+                ipaddress.IPv4Address,
+                ipaddress.IPv6Address,
+                ipaddress.IPv4Network,
+                ipaddress.IPv6Network,
+            ),
+        ):
+            raise TypeError(
+                "value must be an instance of ipaddress.IPv4Address, "
+                "ipaddress.IPv6Address, ipaddress.IPv4Network, or "
+                "ipaddress.IPv6Network"
+            )
+
+        self._value = value
+
+    @property
+    def value(self) -> _IPAddressTypes:
+        return self._value
+
+    def _packed(self) -> bytes:
+        if isinstance(
+            self.value, (ipaddress.IPv4Address, ipaddress.IPv6Address)
+        ):
+            return self.value.packed
+        else:
+            return (
+                self.value.network_address.packed + self.value.netmask.packed
+            )
+
+    def __repr__(self) -> str:
+        return f"<IPAddress(value={self.value})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, IPAddress):
+            return NotImplemented
+
+        return self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash(self.value)
+
+
+class OtherName(GeneralName):
+    def __init__(self, type_id: ObjectIdentifier, value: bytes) -> None:
+        if not isinstance(type_id, ObjectIdentifier):
+            raise TypeError("type_id must be an ObjectIdentifier")
+        if not isinstance(value, bytes):
+            raise TypeError("value must be a binary string")
+
+        self._type_id = type_id
+        self._value = value
+
+    @property
+    def type_id(self) -> ObjectIdentifier:
+        return self._type_id
+
+    @property
+    def value(self) -> bytes:
+        return self._value
+
+    def __repr__(self) -> str:
+        return f"<OtherName(type_id={self.type_id}, value={self.value!r})>"
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, OtherName):
+            return NotImplemented
+
+        return self.type_id == other.type_id and self.value == other.value
+
+    def __hash__(self) -> int:
+        return hash((self.type_id, self.value))
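+
+
+# The concrete GeneralName subclasses above are constructed directly; for
+# example (illustrative values):
+#
+#     DNSName("example.com")
+#     IPAddress(ipaddress.ip_address("192.0.2.1"))
+#     UniformResourceIdentifier("https://example.com")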
diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/name.py b/templates/skills/file_manager/dependencies/cryptography/x509/name.py
new file mode 100644
index 00000000..1b6b89d1
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/cryptography/x509/name.py
@@ -0,0 +1,465 @@
+# This file is dual licensed under the terms of the Apache License, Version
+# 2.0, and the BSD License. See the LICENSE file in the root of this repository
+# for complete details.
+
+from __future__ import annotations
+
+import binascii
+import re
+import sys
+import typing
+import warnings
+
+from cryptography import utils
+from cryptography.hazmat.bindings._rust import x509 as rust_x509
+from cryptography.x509.oid import NameOID, ObjectIdentifier
+
+
+class _ASN1Type(utils.Enum):
+    BitString = 3
+    OctetString = 4
+    UTF8String = 12
+    NumericString = 18
+    PrintableString = 19
+    T61String = 20
+    IA5String = 22
+    UTCTime = 23
+    GeneralizedTime = 24
+    VisibleString = 26
+    UniversalString = 28
+    BMPString = 30
+
+
+_ASN1_TYPE_TO_ENUM = {i.value: i for i in _ASN1Type}
+_NAMEOID_DEFAULT_TYPE: dict[ObjectIdentifier, _ASN1Type] = {
+    NameOID.COUNTRY_NAME: _ASN1Type.PrintableString,
+    NameOID.JURISDICTION_COUNTRY_NAME: _ASN1Type.PrintableString,
+    NameOID.SERIAL_NUMBER: _ASN1Type.PrintableString,
+    NameOID.DN_QUALIFIER: _ASN1Type.PrintableString,
+    NameOID.EMAIL_ADDRESS: _ASN1Type.IA5String,
+    NameOID.DOMAIN_COMPONENT: _ASN1Type.IA5String,
+}
+
+# Type alias
+_OidNameMap = typing.Mapping[ObjectIdentifier, str]
+_NameOidMap = typing.Mapping[str, ObjectIdentifier]
+
+#: Short attribute names from RFC 4514:
+#: https://tools.ietf.org/html/rfc4514#page-7
+_NAMEOID_TO_NAME: _OidNameMap = {
+    NameOID.COMMON_NAME: "CN",
+    NameOID.LOCALITY_NAME: "L",
+    NameOID.STATE_OR_PROVINCE_NAME: "ST",
+    NameOID.ORGANIZATION_NAME: "O",
+    NameOID.ORGANIZATIONAL_UNIT_NAME: "OU",
+    NameOID.COUNTRY_NAME: "C",
+    NameOID.STREET_ADDRESS: "STREET",
+    NameOID.DOMAIN_COMPONENT: "DC",
+    NameOID.USER_ID: "UID",
+}
+_NAME_TO_NAMEOID = {v: k for k, v in _NAMEOID_TO_NAME.items()}
+
+_NAMEOID_LENGTH_LIMIT = {
+    NameOID.COUNTRY_NAME: (2, 2),
+    NameOID.JURISDICTION_COUNTRY_NAME: (2, 2),
+    NameOID.COMMON_NAME: (1, 64),
+}
+
+
+def _escape_dn_value(val: str | bytes) -> str:
+    """Escape special characters in RFC4514 Distinguished Name value."""
+
+    if not val:
+        return ""
+
+    # RFC 4514 Section 2.4 defines the value as being the # (U+0023) character
+    # followed by the hexadecimal encoding of the octets.
+    if isinstance(val, bytes):
+        return "#" + binascii.hexlify(val).decode("utf8")
+
+    # See https://tools.ietf.org/html/rfc4514#section-2.4
+    val = val.replace("\\", "\\\\")
+    val = val.replace('"', '\\"')
+    val = val.replace("+", "\\+")
+    val = val.replace(",", "\\,")
+    val = val.replace(";", "\\;")
+    val = val.replace("<", "\\<")
+    val = val.replace(">", "\\>")
+    val = val.replace("\0", "\\00")
+
+    if val[0] in ("#", " "):
+        val = "\\" + val
+    if val[-1] == " ":
+        val = val[:-1] + "\\ "
+
+    return val
+
+
+def _unescape_dn_value(val: str) -> str:
+    if not val:
+        return ""
+
+    # See https://tools.ietf.org/html/rfc4514#section-3
+
+    # special = escaped / SPACE / SHARP / EQUALS
+    # escaped = DQUOTE / PLUS / COMMA / SEMI / LANGLE / RANGLE
+    def sub(m):
+        val = m.group(1)
+        # Regular escape
+        if len(val) == 1:
+            return val
+        # Hex-value escape
+        return chr(int(val, 16))
+
+    return _RFC4514NameParser._PAIR_RE.sub(sub, val)
+
+
+class NameAttribute:
+    def __init__(
+        self,
+        oid: ObjectIdentifier,
+        value: str | bytes,
+        _type: _ASN1Type | None = None,
+        *,
+        _validate: bool = True,
+    ) -> None:
+        if not isinstance(oid, ObjectIdentifier):
+            raise TypeError(
+                "oid argument must be an ObjectIdentifier instance."
+            )
+        if _type == _ASN1Type.BitString:
+            if oid != NameOID.X500_UNIQUE_IDENTIFIER:
+                raise TypeError(
+                    "oid must be X500_UNIQUE_IDENTIFIER for BitString type."
+                )
+            if not isinstance(value, bytes):
+                raise TypeError("value must be bytes for BitString")
+        else:
+            if not isinstance(value, str):
+                raise TypeError("value argument must be a str")
+
+        length_limits = _NAMEOID_LENGTH_LIMIT.get(oid)
+        if length_limits is not None:
+            min_length, max_length = length_limits
+            assert isinstance(value, str)
+            c_len = len(value.encode("utf8"))
+            if c_len < min_length or c_len > max_length:
+                msg = (
+                    f"Attribute's length must be >= {min_length} and "
+                    f"<= {max_length}, but it was {c_len}"
+                )
+                if _validate is True:
+                    raise ValueError(msg)
+                else:
+                    warnings.warn(msg, stacklevel=2)
+
+        # The appropriate ASN1 string type varies by OID and is defined across
+        # multiple RFCs including 2459, 3280, and 5280. In general UTF8String
+        # is preferred (2459), but 3280 and 5280 specify several OIDs with
+        # alternate types. This means when we see the sentinel value we need
+        # to look up whether the OID has a non-UTF8 type. If it does, set it
+        # to that. Otherwise, UTF8!
+        if _type is None:
+            _type = _NAMEOID_DEFAULT_TYPE.get(oid, _ASN1Type.UTF8String)
+
+        if not isinstance(_type, _ASN1Type):
+            raise TypeError("_type must be from the _ASN1Type enum")
+
+        self._oid = oid
+        self._value = value
+        self._type = _type
+
+    @property
+    def oid(self) -> ObjectIdentifier:
+        return self._oid
+
+    @property
+    def value(self) -> str | bytes:
+        return self._value
+
+    @property
+    def rfc4514_attribute_name(self) -> str:
+        """
+        The short attribute name (for example "CN") if available,
+        otherwise the OID dotted string.
+        """
+        return _NAMEOID_TO_NAME.get(self.oid, self.oid.dotted_string)
+
+    def rfc4514_string(
+        self, attr_name_overrides: _OidNameMap | None = None
+    ) -> str:
+        """
+        Format as RFC4514 Distinguished Name string.
+
+        Use short attribute name if available, otherwise fall back to OID
+        dotted string.
+        """
+ """ + attr_name = ( + attr_name_overrides.get(self.oid) if attr_name_overrides else None + ) + if attr_name is None: + attr_name = self.rfc4514_attribute_name + + return f"{attr_name}={_escape_dn_value(self.value)}" + + def __eq__(self, other: object) -> bool: + if not isinstance(other, NameAttribute): + return NotImplemented + + return self.oid == other.oid and self.value == other.value + + def __hash__(self) -> int: + return hash((self.oid, self.value)) + + def __repr__(self) -> str: + return f"" + + +class RelativeDistinguishedName: + def __init__(self, attributes: typing.Iterable[NameAttribute]): + attributes = list(attributes) + if not attributes: + raise ValueError("a relative distinguished name cannot be empty") + if not all(isinstance(x, NameAttribute) for x in attributes): + raise TypeError("attributes must be an iterable of NameAttribute") + + # Keep list and frozenset to preserve attribute order where it matters + self._attributes = attributes + self._attribute_set = frozenset(attributes) + + if len(self._attribute_set) != len(attributes): + raise ValueError("duplicate attributes are not allowed") + + def get_attributes_for_oid( + self, oid: ObjectIdentifier + ) -> list[NameAttribute]: + return [i for i in self if i.oid == oid] + + def rfc4514_string( + self, attr_name_overrides: _OidNameMap | None = None + ) -> str: + """ + Format as RFC4514 Distinguished Name string. + + Within each RDN, attributes are joined by '+', although that is rarely + used in certificates. + """ + return "+".join( + attr.rfc4514_string(attr_name_overrides) + for attr in self._attributes + ) + + def __eq__(self, other: object) -> bool: + if not isinstance(other, RelativeDistinguishedName): + return NotImplemented + + return self._attribute_set == other._attribute_set + + def __hash__(self) -> int: + return hash(self._attribute_set) + + def __iter__(self) -> typing.Iterator[NameAttribute]: + return iter(self._attributes) + + def __len__(self) -> int: + return len(self._attributes) + + def __repr__(self) -> str: + return f"" + + +class Name: + @typing.overload + def __init__(self, attributes: typing.Iterable[NameAttribute]) -> None: ... + + @typing.overload + def __init__( + self, attributes: typing.Iterable[RelativeDistinguishedName] + ) -> None: ... + + def __init__( + self, + attributes: typing.Iterable[NameAttribute | RelativeDistinguishedName], + ) -> None: + attributes = list(attributes) + if all(isinstance(x, NameAttribute) for x in attributes): + self._attributes = [ + RelativeDistinguishedName([typing.cast(NameAttribute, x)]) + for x in attributes + ] + elif all(isinstance(x, RelativeDistinguishedName) for x in attributes): + self._attributes = typing.cast( + typing.List[RelativeDistinguishedName], attributes + ) + else: + raise TypeError( + "attributes must be a list of NameAttribute" + " or a list RelativeDistinguishedName" + ) + + @classmethod + def from_rfc4514_string( + cls, + data: str, + attr_name_overrides: _NameOidMap | None = None, + ) -> Name: + return _RFC4514NameParser(data, attr_name_overrides or {}).parse() + + def rfc4514_string( + self, attr_name_overrides: _OidNameMap | None = None + ) -> str: + """ + Format as RFC4514 Distinguished Name string. + For example 'CN=foobar.com,O=Foo Corp,C=US' + + An X.509 name is a two-level structure: a list of sets of attributes. + Each list element is separated by ',' and within each list element, set + elements are separated by '+'. The latter is almost never used in + real world certificates. 
+
+
+class Name:
+    @typing.overload
+    def __init__(self, attributes: typing.Iterable[NameAttribute]) -> None: ...
+
+    @typing.overload
+    def __init__(
+        self, attributes: typing.Iterable[RelativeDistinguishedName]
+    ) -> None: ...
+
+    def __init__(
+        self,
+        attributes: typing.Iterable[NameAttribute | RelativeDistinguishedName],
+    ) -> None:
+        attributes = list(attributes)
+        if all(isinstance(x, NameAttribute) for x in attributes):
+            self._attributes = [
+                RelativeDistinguishedName([typing.cast(NameAttribute, x)])
+                for x in attributes
+            ]
+        elif all(isinstance(x, RelativeDistinguishedName) for x in attributes):
+            self._attributes = typing.cast(
+                typing.List[RelativeDistinguishedName], attributes
+            )
+        else:
+            raise TypeError(
+                "attributes must be a list of NameAttribute"
+                " or a list of RelativeDistinguishedName"
+            )
+
+    @classmethod
+    def from_rfc4514_string(
+        cls,
+        data: str,
+        attr_name_overrides: _NameOidMap | None = None,
+    ) -> Name:
+        return _RFC4514NameParser(data, attr_name_overrides or {}).parse()
+
+    def rfc4514_string(
+        self, attr_name_overrides: _OidNameMap | None = None
+    ) -> str:
+        """
+        Format as RFC4514 Distinguished Name string.
+        For example 'CN=foobar.com,O=Foo Corp,C=US'
+
+        An X.509 name is a two-level structure: a list of sets of attributes.
+        Each list element is separated by ',' and within each list element,
+        set elements are separated by '+'. The latter is almost never used in
+        real world certificates. According to RFC4514 section 2.1 the
+        RDNSequence must be reversed when converting to string representation.
+        """
+        return ",".join(
+            attr.rfc4514_string(attr_name_overrides)
+            for attr in reversed(self._attributes)
+        )
+
+    def get_attributes_for_oid(
+        self, oid: ObjectIdentifier
+    ) -> list[NameAttribute]:
+        return [i for i in self if i.oid == oid]
+
+    @property
+    def rdns(self) -> list[RelativeDistinguishedName]:
+        return self._attributes
+
+    def public_bytes(self, backend: typing.Any = None) -> bytes:
+        return rust_x509.encode_name_bytes(self)
+
+    def __eq__(self, other: object) -> bool:
+        if not isinstance(other, Name):
+            return NotImplemented
+
+        return self._attributes == other._attributes
+
+    def __hash__(self) -> int:
+        # TODO: this is relatively expensive, if this looks like a bottleneck
+        # for you, consider optimizing!
+        return hash(tuple(self._attributes))
+
+    def __iter__(self) -> typing.Iterator[NameAttribute]:
+        for rdn in self._attributes:
+            yield from rdn
+
+    def __len__(self) -> int:
+        return sum(len(rdn) for rdn in self._attributes)
+
+    def __repr__(self) -> str:
+        rdns = ",".join(attr.rfc4514_string() for attr in self._attributes)
+        return f"<Name({rdns})>"
+
+
+class _RFC4514NameParser:
+    _OID_RE = re.compile(r"(0|([1-9]\d*))(\.(0|([1-9]\d*)))+")
+    _DESCR_RE = re.compile(r"[a-zA-Z][a-zA-Z\d-]*")
+
+    _PAIR = r"\\([\\ #=\"\+,;<>]|[\da-zA-Z]{2})"
+    _PAIR_RE = re.compile(_PAIR)
+    _LUTF1 = r"[\x01-\x1f\x21\x24-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _SUTF1 = r"[\x01-\x21\x23-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _TUTF1 = r"[\x01-\x1F\x21\x23-\x2A\x2D-\x3A\x3D\x3F-\x5B\x5D-\x7F]"
+    _UTFMB = rf"[\x80-{chr(sys.maxunicode)}]"
+    _LEADCHAR = rf"{_LUTF1}|{_UTFMB}"
+    _STRINGCHAR = rf"{_SUTF1}|{_UTFMB}"
+    _TRAILCHAR = rf"{_TUTF1}|{_UTFMB}"
+    _STRING_RE = re.compile(
+        rf"""
+        (
+            ({_LEADCHAR}|{_PAIR})
+            (
+                ({_STRINGCHAR}|{_PAIR})*
+                ({_TRAILCHAR}|{_PAIR})
+            )?
+        )?
+        """,
+        re.VERBOSE,
+    )
+    _HEXSTRING_RE = re.compile(r"#([\da-zA-Z]{2})+")
+
+    def __init__(self, data: str, attr_name_overrides: _NameOidMap) -> None:
+        self._data = data
+        self._idx = 0
+
+        self._attr_name_overrides = attr_name_overrides
+
+    def _has_data(self) -> bool:
+        return self._idx < len(self._data)
+
+    def _peek(self) -> str | None:
+        if self._has_data():
+            return self._data[self._idx]
+        return None
+
+    def _read_char(self, ch: str) -> None:
+        if self._peek() != ch:
+            raise ValueError
+        self._idx += 1
+
+    def _read_re(self, pat) -> str:
+        match = pat.match(self._data, pos=self._idx)
+        if match is None:
+            raise ValueError
+        val = match.group()
+        self._idx += len(val)
+        return val
+
+    def parse(self) -> Name:
+        """
+        Parses the `data` string and converts it to a Name.
+
+        According to RFC4514 section 2.1 the RDNSequence must be
+        reversed when converting to string representation. So, when
+        we parse it, we need to reverse again to get the RDNs in the
+        correct order.
+ """ + + if not self._has_data(): + return Name([]) + + rdns = [self._parse_rdn()] + + while self._has_data(): + self._read_char(",") + rdns.append(self._parse_rdn()) + + return Name(reversed(rdns)) + + def _parse_rdn(self) -> RelativeDistinguishedName: + nas = [self._parse_na()] + while self._peek() == "+": + self._read_char("+") + nas.append(self._parse_na()) + + return RelativeDistinguishedName(nas) + + def _parse_na(self) -> NameAttribute: + try: + oid_value = self._read_re(self._OID_RE) + except ValueError: + name = self._read_re(self._DESCR_RE) + oid = self._attr_name_overrides.get( + name, _NAME_TO_NAMEOID.get(name) + ) + if oid is None: + raise ValueError + else: + oid = ObjectIdentifier(oid_value) + + self._read_char("=") + if self._peek() == "#": + value = self._read_re(self._HEXSTRING_RE) + value = binascii.unhexlify(value[1:]).decode() + else: + raw_value = self._read_re(self._STRING_RE) + value = _unescape_dn_value(raw_value) + + return NameAttribute(oid, value) diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/ocsp.py b/templates/skills/file_manager/dependencies/cryptography/x509/ocsp.py new file mode 100644 index 00000000..dbb475db --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/ocsp.py @@ -0,0 +1,678 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +import abc +import datetime +import typing + +from cryptography import utils, x509 +from cryptography.hazmat.bindings._rust import ocsp +from cryptography.hazmat.primitives import hashes, serialization +from cryptography.hazmat.primitives.asymmetric.types import ( + CertificateIssuerPrivateKeyTypes, +) +from cryptography.x509.base import ( + _EARLIEST_UTC_TIME, + _convert_to_naive_utc_time, + _reject_duplicate_extension, +) + + +class OCSPResponderEncoding(utils.Enum): + HASH = "By Hash" + NAME = "By Name" + + +class OCSPResponseStatus(utils.Enum): + SUCCESSFUL = 0 + MALFORMED_REQUEST = 1 + INTERNAL_ERROR = 2 + TRY_LATER = 3 + SIG_REQUIRED = 5 + UNAUTHORIZED = 6 + + +_ALLOWED_HASHES = ( + hashes.SHA1, + hashes.SHA224, + hashes.SHA256, + hashes.SHA384, + hashes.SHA512, +) + + +def _verify_algorithm(algorithm: hashes.HashAlgorithm) -> None: + if not isinstance(algorithm, _ALLOWED_HASHES): + raise ValueError( + "Algorithm must be SHA1, SHA224, SHA256, SHA384, or SHA512" + ) + + +class OCSPCertStatus(utils.Enum): + GOOD = 0 + REVOKED = 1 + UNKNOWN = 2 + + +class _SingleResponse: + def __init__( + self, + cert: x509.Certificate, + issuer: x509.Certificate, + algorithm: hashes.HashAlgorithm, + cert_status: OCSPCertStatus, + this_update: datetime.datetime, + next_update: datetime.datetime | None, + revocation_time: datetime.datetime | None, + revocation_reason: x509.ReasonFlags | None, + ): + if not isinstance(cert, x509.Certificate) or not isinstance( + issuer, x509.Certificate + ): + raise TypeError("cert and issuer must be a Certificate") + + _verify_algorithm(algorithm) + if not isinstance(this_update, datetime.datetime): + raise TypeError("this_update must be a datetime object") + if next_update is not None and not isinstance( + next_update, datetime.datetime + ): + raise TypeError("next_update must be a datetime object or None") + + self._cert = cert + self._issuer = issuer + self._algorithm = algorithm + self._this_update = this_update + self._next_update = next_update + + if not 
+
+        if not isinstance(cert_status, OCSPCertStatus):
+            raise TypeError(
+                "cert_status must be an item from the OCSPCertStatus enum"
+            )
+        if cert_status is not OCSPCertStatus.REVOKED:
+            if revocation_time is not None:
+                raise ValueError(
+                    "revocation_time can only be provided if the certificate "
+                    "is revoked"
+                )
+            if revocation_reason is not None:
+                raise ValueError(
+                    "revocation_reason can only be provided if the certificate"
+                    " is revoked"
+                )
+        else:
+            if not isinstance(revocation_time, datetime.datetime):
+                raise TypeError("revocation_time must be a datetime object")
+
+            revocation_time = _convert_to_naive_utc_time(revocation_time)
+            if revocation_time < _EARLIEST_UTC_TIME:
+                raise ValueError(
+                    "The revocation_time must be on or after"
+                    " 1950 January 1."
+                )
+
+            if revocation_reason is not None and not isinstance(
+                revocation_reason, x509.ReasonFlags
+            ):
+                raise TypeError(
+                    "revocation_reason must be an item from the ReasonFlags "
+                    "enum or None"
+                )
+
+        self._cert_status = cert_status
+        self._revocation_time = revocation_time
+        self._revocation_reason = revocation_reason
+
+
+class OCSPRequest(metaclass=abc.ABCMeta):
+    @property
+    @abc.abstractmethod
+    def issuer_key_hash(self) -> bytes:
+        """
+        The hash of the issuer public key
+        """
+
+    @property
+    @abc.abstractmethod
+    def issuer_name_hash(self) -> bytes:
+        """
+        The hash of the issuer name
+        """
+
+    @property
+    @abc.abstractmethod
+    def hash_algorithm(self) -> hashes.HashAlgorithm:
+        """
+        The hash algorithm used in the issuer name and key hashes
+        """
+
+    @property
+    @abc.abstractmethod
+    def serial_number(self) -> int:
+        """
+        The serial number of the cert whose status is being checked
+        """
+
+    @abc.abstractmethod
+    def public_bytes(self, encoding: serialization.Encoding) -> bytes:
+        """
+        Serializes the request to DER
+        """
+
+    @property
+    @abc.abstractmethod
+    def extensions(self) -> x509.Extensions:
+        """
+        The list of request extensions. Not single request extensions.
+        """
+
+
+class OCSPSingleResponse(metaclass=abc.ABCMeta):
+    @property
+    @abc.abstractmethod
+    def certificate_status(self) -> OCSPCertStatus:
+        """
+        The status of the certificate (an element from the OCSPCertStatus enum)
+        """
+
+    @property
+    @abc.abstractmethod
+    def revocation_time(self) -> datetime.datetime | None:
+        """
+        The date of when the certificate was revoked or None if not
+        revoked.
+        """
+
+    @property
+    @abc.abstractmethod
+    def revocation_time_utc(self) -> datetime.datetime | None:
+        """
+        The date of when the certificate was revoked or None if not
+        revoked. Represented as a non-naive UTC datetime.
+        """
+
+    @property
+    @abc.abstractmethod
+    def revocation_reason(self) -> x509.ReasonFlags | None:
+        """
+        The reason the certificate was revoked or None if not specified or
+        not revoked.
+        """
+
+    @property
+    @abc.abstractmethod
+    def this_update(self) -> datetime.datetime:
+        """
+        The most recent time at which the status being indicated is known by
+        the responder to have been correct
+        """
+
+    @property
+    @abc.abstractmethod
+    def this_update_utc(self) -> datetime.datetime:
+        """
+        The most recent time at which the status being indicated is known by
+        the responder to have been correct. Represented as a non-naive UTC
+        datetime.
+        """
+
+    @property
+    @abc.abstractmethod
+    def next_update(self) -> datetime.datetime | None:
+        """
+        The time when newer information will be available
+        """
+
+    @property
+    @abc.abstractmethod
+    def next_update_utc(self) -> datetime.datetime | None:
+        """
+        The time when newer information will be available.
Represented as a + non-naive UTC datetime. + """ + + @property + @abc.abstractmethod + def issuer_key_hash(self) -> bytes: + """ + The hash of the issuer public key + """ + + @property + @abc.abstractmethod + def issuer_name_hash(self) -> bytes: + """ + The hash of the issuer name + """ + + @property + @abc.abstractmethod + def hash_algorithm(self) -> hashes.HashAlgorithm: + """ + The hash algorithm used in the issuer name and key hashes + """ + + @property + @abc.abstractmethod + def serial_number(self) -> int: + """ + The serial number of the cert whose status is being checked + """ + + +class OCSPResponse(metaclass=abc.ABCMeta): + @property + @abc.abstractmethod + def responses(self) -> typing.Iterator[OCSPSingleResponse]: + """ + An iterator over the individual SINGLERESP structures in the + response + """ + + @property + @abc.abstractmethod + def response_status(self) -> OCSPResponseStatus: + """ + The status of the response. This is a value from the OCSPResponseStatus + enumeration + """ + + @property + @abc.abstractmethod + def signature_algorithm_oid(self) -> x509.ObjectIdentifier: + """ + The ObjectIdentifier of the signature algorithm + """ + + @property + @abc.abstractmethod + def signature_hash_algorithm( + self, + ) -> hashes.HashAlgorithm | None: + """ + Returns a HashAlgorithm corresponding to the type of the digest signed + """ + + @property + @abc.abstractmethod + def signature(self) -> bytes: + """ + The signature bytes + """ + + @property + @abc.abstractmethod + def tbs_response_bytes(self) -> bytes: + """ + The tbsResponseData bytes + """ + + @property + @abc.abstractmethod + def certificates(self) -> list[x509.Certificate]: + """ + A list of certificates used to help build a chain to verify the OCSP + response. This situation occurs when the OCSP responder uses a delegate + certificate. + """ + + @property + @abc.abstractmethod + def responder_key_hash(self) -> bytes | None: + """ + The responder's key hash or None + """ + + @property + @abc.abstractmethod + def responder_name(self) -> x509.Name | None: + """ + The responder's Name or None + """ + + @property + @abc.abstractmethod + def produced_at(self) -> datetime.datetime: + """ + The time the response was produced + """ + + @property + @abc.abstractmethod + def produced_at_utc(self) -> datetime.datetime: + """ + The time the response was produced. Represented as a non-naive UTC + datetime. + """ + + @property + @abc.abstractmethod + def certificate_status(self) -> OCSPCertStatus: + """ + The status of the certificate (an element from the OCSPCertStatus enum) + """ + + @property + @abc.abstractmethod + def revocation_time(self) -> datetime.datetime | None: + """ + The date of when the certificate was revoked or None if not + revoked. + """ + + @property + @abc.abstractmethod + def revocation_time_utc(self) -> datetime.datetime | None: + """ + The date of when the certificate was revoked or None if not + revoked. Represented as a non-naive UTC datetime. + """ + + @property + @abc.abstractmethod + def revocation_reason(self) -> x509.ReasonFlags | None: + """ + The reason the certificate was revoked or None if not specified or + not revoked. 
+ """ + + @property + @abc.abstractmethod + def this_update(self) -> datetime.datetime: + """ + The most recent time at which the status being indicated is known by + the responder to have been correct + """ + + @property + @abc.abstractmethod + def this_update_utc(self) -> datetime.datetime: + """ + The most recent time at which the status being indicated is known by + the responder to have been correct. Represented as a non-naive UTC + datetime. + """ + + @property + @abc.abstractmethod + def next_update(self) -> datetime.datetime | None: + """ + The time when newer information will be available + """ + + @property + @abc.abstractmethod + def next_update_utc(self) -> datetime.datetime | None: + """ + The time when newer information will be available. Represented as a + non-naive UTC datetime. + """ + + @property + @abc.abstractmethod + def issuer_key_hash(self) -> bytes: + """ + The hash of the issuer public key + """ + + @property + @abc.abstractmethod + def issuer_name_hash(self) -> bytes: + """ + The hash of the issuer name + """ + + @property + @abc.abstractmethod + def hash_algorithm(self) -> hashes.HashAlgorithm: + """ + The hash algorithm used in the issuer name and key hashes + """ + + @property + @abc.abstractmethod + def serial_number(self) -> int: + """ + The serial number of the cert whose status is being checked + """ + + @property + @abc.abstractmethod + def extensions(self) -> x509.Extensions: + """ + The list of response extensions. Not single response extensions. + """ + + @property + @abc.abstractmethod + def single_extensions(self) -> x509.Extensions: + """ + The list of single response extensions. Not response extensions. + """ + + @abc.abstractmethod + def public_bytes(self, encoding: serialization.Encoding) -> bytes: + """ + Serializes the response to DER + """ + + +OCSPRequest.register(ocsp.OCSPRequest) +OCSPResponse.register(ocsp.OCSPResponse) +OCSPSingleResponse.register(ocsp.OCSPSingleResponse) + + +class OCSPRequestBuilder: + def __init__( + self, + request: tuple[ + x509.Certificate, x509.Certificate, hashes.HashAlgorithm + ] + | None = None, + request_hash: tuple[bytes, bytes, int, hashes.HashAlgorithm] + | None = None, + extensions: list[x509.Extension[x509.ExtensionType]] = [], + ) -> None: + self._request = request + self._request_hash = request_hash + self._extensions = extensions + + def add_certificate( + self, + cert: x509.Certificate, + issuer: x509.Certificate, + algorithm: hashes.HashAlgorithm, + ) -> OCSPRequestBuilder: + if self._request is not None or self._request_hash is not None: + raise ValueError("Only one certificate can be added to a request") + + _verify_algorithm(algorithm) + if not isinstance(cert, x509.Certificate) or not isinstance( + issuer, x509.Certificate + ): + raise TypeError("cert and issuer must be a Certificate") + + return OCSPRequestBuilder( + (cert, issuer, algorithm), self._request_hash, self._extensions + ) + + def add_certificate_by_hash( + self, + issuer_name_hash: bytes, + issuer_key_hash: bytes, + serial_number: int, + algorithm: hashes.HashAlgorithm, + ) -> OCSPRequestBuilder: + if self._request is not None or self._request_hash is not None: + raise ValueError("Only one certificate can be added to a request") + + if not isinstance(serial_number, int): + raise TypeError("serial_number must be an integer") + + _verify_algorithm(algorithm) + utils._check_bytes("issuer_name_hash", issuer_name_hash) + utils._check_bytes("issuer_key_hash", issuer_key_hash) + if algorithm.digest_size != len( + issuer_name_hash + ) or 
algorithm.digest_size != len(issuer_key_hash): + raise ValueError( + "issuer_name_hash and issuer_key_hash must be the same length " + "as the digest size of the algorithm" + ) + + return OCSPRequestBuilder( + self._request, + (issuer_name_hash, issuer_key_hash, serial_number, algorithm), + self._extensions, + ) + + def add_extension( + self, extval: x509.ExtensionType, critical: bool + ) -> OCSPRequestBuilder: + if not isinstance(extval, x509.ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = x509.Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return OCSPRequestBuilder( + self._request, self._request_hash, [*self._extensions, extension] + ) + + def build(self) -> OCSPRequest: + if self._request is None and self._request_hash is None: + raise ValueError("You must add a certificate before building") + + return ocsp.create_ocsp_request(self) + + +class OCSPResponseBuilder: + def __init__( + self, + response: _SingleResponse | None = None, + responder_id: tuple[x509.Certificate, OCSPResponderEncoding] + | None = None, + certs: list[x509.Certificate] | None = None, + extensions: list[x509.Extension[x509.ExtensionType]] = [], + ): + self._response = response + self._responder_id = responder_id + self._certs = certs + self._extensions = extensions + + def add_response( + self, + cert: x509.Certificate, + issuer: x509.Certificate, + algorithm: hashes.HashAlgorithm, + cert_status: OCSPCertStatus, + this_update: datetime.datetime, + next_update: datetime.datetime | None, + revocation_time: datetime.datetime | None, + revocation_reason: x509.ReasonFlags | None, + ) -> OCSPResponseBuilder: + if self._response is not None: + raise ValueError("Only one response per OCSPResponse.") + + singleresp = _SingleResponse( + cert, + issuer, + algorithm, + cert_status, + this_update, + next_update, + revocation_time, + revocation_reason, + ) + return OCSPResponseBuilder( + singleresp, + self._responder_id, + self._certs, + self._extensions, + ) + + def responder_id( + self, encoding: OCSPResponderEncoding, responder_cert: x509.Certificate + ) -> OCSPResponseBuilder: + if self._responder_id is not None: + raise ValueError("responder_id can only be set once") + if not isinstance(responder_cert, x509.Certificate): + raise TypeError("responder_cert must be a Certificate") + if not isinstance(encoding, OCSPResponderEncoding): + raise TypeError( + "encoding must be an element from OCSPResponderEncoding" + ) + + return OCSPResponseBuilder( + self._response, + (responder_cert, encoding), + self._certs, + self._extensions, + ) + + def certificates( + self, certs: typing.Iterable[x509.Certificate] + ) -> OCSPResponseBuilder: + if self._certs is not None: + raise ValueError("certificates may only be set once") + certs = list(certs) + if len(certs) == 0: + raise ValueError("certs must not be an empty list") + if not all(isinstance(x, x509.Certificate) for x in certs): + raise TypeError("certs must be a list of Certificates") + return OCSPResponseBuilder( + self._response, + self._responder_id, + certs, + self._extensions, + ) + + def add_extension( + self, extval: x509.ExtensionType, critical: bool + ) -> OCSPResponseBuilder: + if not isinstance(extval, x509.ExtensionType): + raise TypeError("extension must be an ExtensionType") + + extension = x509.Extension(extval.oid, critical, extval) + _reject_duplicate_extension(extension, self._extensions) + + return OCSPResponseBuilder( + self._response, + self._responder_id, + self._certs, 
+ [*self._extensions, extension], + ) + + def sign( + self, + private_key: CertificateIssuerPrivateKeyTypes, + algorithm: hashes.HashAlgorithm | None, + ) -> OCSPResponse: + if self._response is None: + raise ValueError("You must add a response before signing") + if self._responder_id is None: + raise ValueError("You must add a responder_id before signing") + + return ocsp.create_ocsp_response( + OCSPResponseStatus.SUCCESSFUL, self, private_key, algorithm + ) + + @classmethod + def build_unsuccessful( + cls, response_status: OCSPResponseStatus + ) -> OCSPResponse: + if not isinstance(response_status, OCSPResponseStatus): + raise TypeError( + "response_status must be an item from OCSPResponseStatus" + ) + if response_status is OCSPResponseStatus.SUCCESSFUL: + raise ValueError("response_status cannot be SUCCESSFUL") + + return ocsp.create_ocsp_response(response_status, None, None, None) + + +load_der_ocsp_request = ocsp.load_der_ocsp_request +load_der_ocsp_response = ocsp.load_der_ocsp_response diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/oid.py b/templates/skills/file_manager/dependencies/cryptography/x509/oid.py new file mode 100644 index 00000000..d4e409e0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/oid.py @@ -0,0 +1,35 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. + +from __future__ import annotations + +from cryptography.hazmat._oid import ( + AttributeOID, + AuthorityInformationAccessOID, + CertificatePoliciesOID, + CRLEntryExtensionOID, + ExtendedKeyUsageOID, + ExtensionOID, + NameOID, + ObjectIdentifier, + OCSPExtensionOID, + PublicKeyAlgorithmOID, + SignatureAlgorithmOID, + SubjectInformationAccessOID, +) + +__all__ = [ + "AttributeOID", + "AuthorityInformationAccessOID", + "CRLEntryExtensionOID", + "CertificatePoliciesOID", + "ExtendedKeyUsageOID", + "ExtensionOID", + "NameOID", + "OCSPExtensionOID", + "ObjectIdentifier", + "PublicKeyAlgorithmOID", + "SignatureAlgorithmOID", + "SubjectInformationAccessOID", +] diff --git a/templates/skills/file_manager/dependencies/cryptography/x509/verification.py b/templates/skills/file_manager/dependencies/cryptography/x509/verification.py new file mode 100644 index 00000000..b8365068 --- /dev/null +++ b/templates/skills/file_manager/dependencies/cryptography/x509/verification.py @@ -0,0 +1,28 @@ +# This file is dual licensed under the terms of the Apache License, Version +# 2.0, and the BSD License. See the LICENSE file in the root of this repository +# for complete details. 
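+
+# ``Store`` holds the trusted root certificates, ``PolicyBuilder`` is the
+# entry point that produces ``ClientVerifier``/``ServerVerifier`` objects,
+# and ``Subject`` is the name form a server certificate is matched against
+# (a DNS name or an IP address).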
+ +from __future__ import annotations + +import typing + +from cryptography.hazmat.bindings._rust import x509 as rust_x509 +from cryptography.x509.general_name import DNSName, IPAddress + +__all__ = [ + "ClientVerifier", + "PolicyBuilder", + "ServerVerifier", + "Store", + "Subject", + "VerificationError", + "VerifiedClient", +] + +Store = rust_x509.Store +Subject = typing.Union[DNSName, IPAddress] +VerifiedClient = rust_x509.VerifiedClient +ClientVerifier = rust_x509.ClientVerifier +ServerVerifier = rust_x509.ServerVerifier +PolicyBuilder = rust_x509.PolicyBuilder +VerificationError = rust_x509.VerificationError diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/INSTALLER b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/LICENSE b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/LICENSE new file mode 100644 index 00000000..39400677 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/LICENSE @@ -0,0 +1,22 @@ +Copyright (c) 2004-2016 Yusuke Shinyama + +Permission is hereby granted, free of charge, to any person +obtaining a copy of this software and associated documentation +files (the "Software"), to deal in the Software without +restriction, including without limitation the rights to use, +copy, modify, merge, publish, distribute, sublicense, and/or +sell copies of the Software, and to permit persons to whom the +Software is furnished to do so, subject to the following +conditions: + +The above copyright notice and this permission notice shall be +included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY +KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE +WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR +PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR +OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE +SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
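The `ocsp.py` module added above is mostly abstract interfaces plus two fluent builders, and is easiest to read alongside a short usage sketch. The following minimal example (not part of the diff; the PEM paths are placeholder assumptions) drives the `OCSPRequestBuilder` defined above using only names from that file:

```python
# Sketch: build and serialize an OCSP request with the builder defined above.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

# Placeholder inputs; any certificate and its issuer will do.
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# Each builder method returns a *new* builder; add_certificate and
# add_certificate_by_hash are mutually exclusive (one cert per request).
req = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA256())
    .build()
)
der = req.public_bytes(serialization.Encoding.DER)  # POST this to the responder

# The responder's answer round-trips through the same module:
# ocsp.load_der_ocsp_response(data).certificate_status -> OCSPCertStatus.GOOD / ...
```

The responder side mirrors this flow: `OCSPResponseBuilder().add_response(...).responder_id(OCSPResponderEncoding.HASH, responder_cert).sign(key, hashes.SHA256())`, or `OCSPResponseBuilder.build_unsuccessful(...)` for the non-SUCCESSFUL statuses.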
diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/METADATA b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/METADATA new file mode 100644 index 00000000..9f806a78 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/METADATA @@ -0,0 +1,115 @@ +Metadata-Version: 2.1 +Name: pdfminer.six +Version: 20240706 +Summary: PDF parser and analyzer +Home-page: https://github.com/pdfminer/pdfminer.six +Author: Yusuke Shinyama + Philippe Guglielmetti +Author-email: pdfminer@goulu.net +License: MIT +Keywords: pdf parser,pdf converter,layout analysis,text mining +Platform: UNKNOWN +Classifier: Programming Language :: Python +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Classifier: Programming Language :: Python :: 3 :: Only +Classifier: Development Status :: 5 - Production/Stable +Classifier: Environment :: Console +Classifier: Intended Audience :: Developers +Classifier: Intended Audience :: Science/Research +Classifier: License :: OSI Approved :: MIT License +Classifier: Topic :: Text Processing +Requires-Python: >=3.8 +Description-Content-Type: text/markdown +License-File: LICENSE +Requires-Dist: charset-normalizer (>=2.0.0) +Requires-Dist: cryptography (>=36.0.0) +Provides-Extra: dev +Requires-Dist: atheris ; extra == 'dev' +Requires-Dist: black ; extra == 'dev' +Requires-Dist: mypy (==0.931) ; extra == 'dev' +Requires-Dist: nox ; extra == 'dev' +Requires-Dist: pytest ; extra == 'dev' +Provides-Extra: docs +Requires-Dist: sphinx ; extra == 'docs' +Requires-Dist: sphinx-argparse ; extra == 'docs' +Provides-Extra: image +Requires-Dist: Pillow ; extra == 'image' + +pdfminer.six +============ + +[![Continuous integration](https://github.com/pdfminer/pdfminer.six/actions/workflows/actions.yml/badge.svg)](https://github.com/pdfminer/pdfminer.six/actions/workflows/actions.yml) +[![PyPI version](https://img.shields.io/pypi/v/pdfminer.six.svg)](https://pypi.python.org/pypi/pdfminer.six/) +[![gitter](https://badges.gitter.im/pdfminer-six/Lobby.svg)](https://gitter.im/pdfminer-six/Lobby?utm_source=badge&utm_medium) + +*We fathom PDF* + +Pdfminer.six is a community maintained fork of the original PDFMiner. It is a tool for extracting information from PDF +documents. It focuses on getting and analyzing text data. Pdfminer.six extracts the text from a page directly from the +sourcecode of the PDF. It can also be used to get the exact location, font or color of the text. + +It is built in a modular way such that each component of pdfminer.six can be replaced easily. You can implement your own +interpreter or rendering device that uses the power of pdfminer.six for other purposes than text analysis. + +Check out the full documentation on +[Read the Docs](https://pdfminersix.readthedocs.io). + + +Features +-------- + +* Written entirely in Python. +* Parse, analyze, and convert PDF documents. +* Extract content as text, images, html or [hOCR](https://en.wikipedia.org/wiki/HOCR). +* PDF-1.7 specification support. (well, almost). +* CJK languages and vertical writing scripts support. +* Various font types (Type1, TrueType, Type3, and CID) support. +* Support for extracting images (JPG, JBIG2, Bitmaps). 
+* Support for various compressions (ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, + CCITTFaxDecode) +* Support for RC4 and AES encryption. +* Support for AcroForm interactive form extraction. +* Table of contents extraction. +* Tagged contents extraction. +* Automatic layout analysis. + +How to use +---------- + +* Install Python 3.8 or newer. +* Install pdfminer.six. + ```bash + pip install pdfminer.six + +* (Optionally) install extra dependencies for extracting images. + + ```bash + pip install 'pdfminer.six[image]' + +* Use the command-line interface to extract text from pdf. + + ```bash + pdf2txt.py example.pdf + +* Or use it with Python. + ```python + from pdfminer.high_level import extract_text + + text = extract_text("example.pdf") + print(text) + ``` + +Contributing +------------ + +Be sure to read the [contribution guidelines](https://github.com/pdfminer/pdfminer.six/blob/master/CONTRIBUTING.md). + +Acknowledgement +--------------- + +This repository includes code from `pyHanko` ; the original license has been included [here](/docs/licenses/LICENSE.pyHanko). + + diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/RECORD b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/RECORD new file mode 100644 index 00000000..30d6e2ff --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/RECORD @@ -0,0 +1,222 @@ +../../bin/__pycache__/dumppdf.cpython-311.pyc,, +../../bin/__pycache__/pdf2txt.cpython-311.pyc,, +../../bin/dumppdf.py,sha256=V8BWlO1HMKdUS8DB4MpljW96zzj6WBxl_4ItnT2-OOc,14456 +../../bin/pdf2txt.py,sha256=UnSC1WMDQh_adY_wKLksw-_XQ3Rm119YaKg4CL61Y64,9895 +pdfminer.six-20240706.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +pdfminer.six-20240706.dist-info/LICENSE,sha256=8B5daZozdmtcr0pQmfKFarpPP3XqHfBdwFcrNLS-qPI,1093 +pdfminer.six-20240706.dist-info/METADATA,sha256=iKHD9kwo3TuFhJDXtwkaKFqRqWALGZ_yjunrmlAUBOM,4113 +pdfminer.six-20240706.dist-info/RECORD,, +pdfminer.six-20240706.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pdfminer.six-20240706.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +pdfminer.six-20240706.dist-info/top_level.txt,sha256=VXd4SVCY_kvBfNFxOQxjFvUTMNHEm88nX7ksfFtZark,9 +pdfminer/__init__.py,sha256=KiLNv1D8Esn8d4DS7MPQr7q3ghoObm8ahrA8lc77TU0,260 +pdfminer/__pycache__/__init__.cpython-311.pyc,, +pdfminer/__pycache__/_saslprep.cpython-311.pyc,, +pdfminer/__pycache__/arcfour.cpython-311.pyc,, +pdfminer/__pycache__/ascii85.cpython-311.pyc,, +pdfminer/__pycache__/ccitt.cpython-311.pyc,, +pdfminer/__pycache__/cmapdb.cpython-311.pyc,, +pdfminer/__pycache__/converter.cpython-311.pyc,, +pdfminer/__pycache__/data_structures.cpython-311.pyc,, +pdfminer/__pycache__/encodingdb.cpython-311.pyc,, +pdfminer/__pycache__/fontmetrics.cpython-311.pyc,, +pdfminer/__pycache__/glyphlist.cpython-311.pyc,, +pdfminer/__pycache__/high_level.cpython-311.pyc,, +pdfminer/__pycache__/image.cpython-311.pyc,, +pdfminer/__pycache__/jbig2.cpython-311.pyc,, +pdfminer/__pycache__/latin_enc.cpython-311.pyc,, +pdfminer/__pycache__/layout.cpython-311.pyc,, +pdfminer/__pycache__/lzw.cpython-311.pyc,, +pdfminer/__pycache__/pdfcolor.cpython-311.pyc,, +pdfminer/__pycache__/pdfdevice.cpython-311.pyc,, +pdfminer/__pycache__/pdfdocument.cpython-311.pyc,, +pdfminer/__pycache__/pdfexceptions.cpython-311.pyc,, +pdfminer/__pycache__/pdffont.cpython-311.pyc,, +pdfminer/__pycache__/pdfinterp.cpython-311.pyc,, 
+pdfminer/__pycache__/pdfpage.cpython-311.pyc,, +pdfminer/__pycache__/pdfparser.cpython-311.pyc,, +pdfminer/__pycache__/pdftypes.cpython-311.pyc,, +pdfminer/__pycache__/psexceptions.cpython-311.pyc,, +pdfminer/__pycache__/psparser.cpython-311.pyc,, +pdfminer/__pycache__/runlength.cpython-311.pyc,, +pdfminer/__pycache__/settings.cpython-311.pyc,, +pdfminer/__pycache__/utils.cpython-311.pyc,, +pdfminer/_saslprep.py,sha256=csRv9Pla5C4TnC3K8MQ1Wf3afjlQLO_oZA1qYS2GGgc,3648 +pdfminer/arcfour.py,sha256=iaqLmg8qly0KWRJHlchXe91s0B69rZN06cIcLKr0w9g,929 +pdfminer/ascii85.py,sha256=3DdOdfYaFTZGW2eANGsULLew3FxsSqbs7S5nmDzrLvU,2097 +pdfminer/ccitt.py,sha256=vhC7RC6ufqwwnzBfZ7vVOXCh-K1uRTVKUHvUvyTDJ3g,21519 +pdfminer/cmap/78-EUC-H.pickle.gz,sha256=p-H5oJQeZvVtBr3UsvEII3pmtVAbf6S16aCalkdUV-Q,20532 +pdfminer/cmap/78-EUC-V.pickle.gz,sha256=pUfiHM5pjSqBTL7UrulKRlI_mo_o8Mov0UUBc99umOw,20551 +pdfminer/cmap/78-H.pickle.gz,sha256=CWNfCjit7LNz51CuZPSD8bbUXQZDc-s0F7lLZfJIxdY,19882 +pdfminer/cmap/78-RKSJ-H.pickle.gz,sha256=WsIjhLmBlvAmBTV5JzZ7iKmv1TexUE61KnuBZzMWOgA,22969 +pdfminer/cmap/78-RKSJ-V.pickle.gz,sha256=EJJAshpJb_Bg3X7e3KeNpeOAMVfDwPsGq6FzR8pY8gA,22990 +pdfminer/cmap/78-V.pickle.gz,sha256=7yu-sp2ek38oK5EpJ57jm-PTwp0LBiwbxxppIUxodkA,19883 +pdfminer/cmap/78ms-RKSJ-H.pickle.gz,sha256=jOLtm7ym45AmGCMg6isOGKkCyJU068C2qq-PWF4XScQ,25942 +pdfminer/cmap/78ms-RKSJ-V.pickle.gz,sha256=w3ew7r3NZGDXMsRc6z7ADQtAiDRnjBKcDprA1S9N7Gc,25964 +pdfminer/cmap/83pv-RKSJ-H.pickle.gz,sha256=aotSZ4kFkrkYWoTNQ0CNVmkLk8cQG9Cpv1omofOdgDE,26305 +pdfminer/cmap/83pv-RKSJ-V.pickle.gz,sha256=3Sc6w8oBgIOIaZRpHTRTMnoN3Yfr_ndOm1c6aR8XVAc,26305 +pdfminer/cmap/90ms-RKSJ-H.pickle.gz,sha256=VALKJ1_1zbgQGGcnVH44dLMUYYV6tIwmmGxBMLXD2c8,25732 +pdfminer/cmap/90ms-RKSJ-V.pickle.gz,sha256=Z-w7DVVlRFSSsS97XqswF7hP1Gr3c29cMx5UJkvS-0k,25757 +pdfminer/cmap/90msp-RKSJ-H.pickle.gz,sha256=qFvIdQzvYhRnqkds2ZnmvF5mxCqvHHSghjLmwPmZVbM,25670 +pdfminer/cmap/90msp-RKSJ-V.pickle.gz,sha256=SOuXN7K7A2SZ8du7aCHnTyGPX9v_BYJfLhjeVRdBZew,25688 +pdfminer/cmap/90pv-RKSJ-H.pickle.gz,sha256=CMZGPmC0skunEchEwdKrruDT5fD5RSGY6-7hYSgaiPE,24226 +pdfminer/cmap/90pv-RKSJ-V.pickle.gz,sha256=CxyDje-Hs1GxwWZjRl_7GACXeidNhCtZMuxkzurTICA,24021 +pdfminer/cmap/Add-H.pickle.gz,sha256=SyAjvuziTJOOnGy0w1W-iNGbW4sNvU3uZO8GmRBGNEE,21027 +pdfminer/cmap/Add-RKSJ-H.pickle.gz,sha256=N4AD5AxFBjJ_MloLTevaRpNMqQNe3e1jrMpWvFV2REs,24275 +pdfminer/cmap/Add-RKSJ-V.pickle.gz,sha256=VYFX3uuJjemf4zKB_frNpcleJWUTpLpEV5qIYvrUttU,24079 +pdfminer/cmap/Add-V.pickle.gz,sha256=Glq3UqpM1KyxLO4xqbVlC5F_0Q--CosXqppXvPBlu4Y,20874 +pdfminer/cmap/B5-H.pickle.gz,sha256=mQ5Sx0WbDZw6bzSRYR0Via9ZgDmOBu5-2bLw3jnb_vQ,42594 +pdfminer/cmap/B5-V.pickle.gz,sha256=TDMK1CBDTGjieyexbEr3UDoNlLQ374xFVDVrs7Qer4o,42549 +pdfminer/cmap/B5pc-H.pickle.gz,sha256=Abd36NL2fLmcIE6HkugQwqTcOXYEFoEDcdH7Z2GUDhY,42602 +pdfminer/cmap/B5pc-V.pickle.gz,sha256=Xb_E2IjM1XVideRVmU8aKHKIxNjf4kyEW9WlIJSBAi8,42557 +pdfminer/cmap/CNS-EUC-H.pickle.gz,sha256=166XDDBel7vF2aEuOW_PXT3VNWPUG-QeS2nsu8eTueU,56990 +pdfminer/cmap/CNS-EUC-V.pickle.gz,sha256=AvPS6CYX2VJZlUj8wOqfbXHGXEcJ3LlnFfQJAaypRJE,56943 +pdfminer/cmap/CNS1-H.pickle.gz,sha256=xSWxUulrl6lYuHqf3DRwOeyRiIkqjV144p9FQ6_IQ-k,17615 +pdfminer/cmap/CNS1-V.pickle.gz,sha256=_YcNPgi_SQD-_S23Aczxt0LqRdNMevlaaQaJzsZhZKc,17564 +pdfminer/cmap/CNS2-H.pickle.gz,sha256=FVAPNB2ZWNDzV8zZXak2VkwvRuw_j5E6QM42Xe9YwRk,21723 +pdfminer/cmap/CNS2-V.pickle.gz,sha256=LLxuq10DoeYyOcM9MWr6O5xYMrdLRIj3QISSNH5Xiik,21723 +pdfminer/cmap/ETHK-B5-H.pickle.gz,sha256=C4bL4-sRW_eq4JEmhuiC-wLAX3AMURptcTqLCx5g35Q,59548 
+pdfminer/cmap/ETHK-B5-V.pickle.gz,sha256=3GrBJp1X0UVffnnXa_4qq8NhoXlS8Gw8_Yypr-uqYMM,59481 +pdfminer/cmap/ETen-B5-H.pickle.gz,sha256=F1uOZHjWQeF3k8DDMfDW72DCh17uxKc5E1x0JyOsrJg,43982 +pdfminer/cmap/ETen-B5-V.pickle.gz,sha256=AqjF63JrrxubuTEzONR3yMogDEwGspkuQ9ymy-fUShI,43924 +pdfminer/cmap/ETenms-B5-H.pickle.gz,sha256=YF0QwNqDNu_iKUx9z3rDHSWvV7lSs0hKz6bZTLj68tA,320 +pdfminer/cmap/ETenms-B5-V.pickle.gz,sha256=D1s5y2dUfGFWSIO73op1IutKKErwzuO4n9_0CGMNXV0,438 +pdfminer/cmap/EUC-H.pickle.gz,sha256=8E0lZnJg9cGsfMP1jNCqj4_lm9L95Qzg4rxp0x96w9I,20429 +pdfminer/cmap/EUC-V.pickle.gz,sha256=wC6uTkiizfux7zi-tEdBL3shAoAYOCTJ1FLtGJWZJXw,20455 +pdfminer/cmap/Ext-H.pickle.gz,sha256=YH_-J4Q2FnYXMVr-b3su1MqaFR3aGOC49mniCL1qvG0,22272 +pdfminer/cmap/Ext-RKSJ-H.pickle.gz,sha256=J_Lod8oCTL3ivrr6OdMGTOLoLJuoBfUY6c0Ps8cOz0k,25721 +pdfminer/cmap/Ext-RKSJ-V.pickle.gz,sha256=9I71wM2D5NGI9HgGFqD6X9gW8XOIhQGAS8xRFhiFOTI,25750 +pdfminer/cmap/Ext-V.pickle.gz,sha256=dpXnvufST2ki9AiFGmv3z4ox3QmbED_rllDjdke6dPw,22307 +pdfminer/cmap/GB-EUC-H.pickle.gz,sha256=-SE2XoCf0z_BBxF3J0Ht0WSpo-vYoLyrr2GLy4PSD2I,22118 +pdfminer/cmap/GB-EUC-V.pickle.gz,sha256=qf6j9ISgm47D356u480Ny2yYkKGO_-7dUl1QVRzhz7c,22111 +pdfminer/cmap/GB-H.pickle.gz,sha256=1aSV_vEpjkNAS9fpgUOO14_oTwBIDBGeVUQJyKJwphw,21699 +pdfminer/cmap/GB-V.pickle.gz,sha256=P5N9GTqRJlbqdr4sT0plXo3rWGzOR4fDmcaSNX0Mvwo,21694 +pdfminer/cmap/GBK-EUC-H.pickle.gz,sha256=ioZB_vvWzzYhbkGWQzOE012NT7Py0WezwaFIGWjzA0k,68254 +pdfminer/cmap/GBK-EUC-V.pickle.gz,sha256=6tPTIfRfXUAbMhIaTABT9FL2JKkjKT6FgzRDDniNUzE,68199 +pdfminer/cmap/GBK2K-H.pickle.gz,sha256=wBuqDHiPPaqv9VaDTU1eMCFB67fsPybZ2RGGZmCxkC8,89917 +pdfminer/cmap/GBK2K-V.pickle.gz,sha256=ZAi_1nLBrm53lccuW7mv7qDSUQzZRHohw24v6AeJURU,89872 +pdfminer/cmap/GBKp-EUC-H.pickle.gz,sha256=NGCsroqeQGEbvBGC5Eg1gqdrBNRUUxm9PwoqW9g_rc8,68148 +pdfminer/cmap/GBKp-EUC-V.pickle.gz,sha256=45uveQsNk-hQT8TUm-8I6DGEC0Syq6K60iG9Pi-n8Kg,68102 +pdfminer/cmap/GBT-EUC-H.pickle.gz,sha256=XtxV1g4U3Krwkvl-tk_qEmaTsQT3_oL9G0NfJfM6f3o,23815 +pdfminer/cmap/GBT-EUC-V.pickle.gz,sha256=V1yteYUEaV_dOQFZI1psT3VMS8HHfAWBepvd4_BcieQ,23806 +pdfminer/cmap/GBT-H.pickle.gz,sha256=XLJf4-Yyvgv-9Lo8F8Mv9hAIuKommBnHKvIWX593TPg,23339 +pdfminer/cmap/GBT-V.pickle.gz,sha256=m6FvuKLzZmSh1sYnH4W9G9no9QGKzLXASmCNtXY0QYk,23322 +pdfminer/cmap/GBTpc-EUC-H.pickle.gz,sha256=PIJVdc84NSiEspGVQ5EfqFgJd9visB1mE5sgnSdExnw,23650 +pdfminer/cmap/GBTpc-EUC-V.pickle.gz,sha256=rik4aMR0rof3ox4klA-Yv_iS6pd-aJ7qhvShD-xBKrw,23647 +pdfminer/cmap/GBpc-EUC-H.pickle.gz,sha256=Zk1C0Dd-WIRREyQJ8LEMTJlJevQdyavYXkh5QwjUQ4Y,21945 +pdfminer/cmap/GBpc-EUC-V.pickle.gz,sha256=-nutvkDVFbHP6vYBWL9QikX4DVBBSFX9GNA_Nxuo86M,21956 +pdfminer/cmap/H.pickle.gz,sha256=ZQ74NI8-K7Kp6SJQ158K7QK3MB7fTTeVeOpNZ6wCykY,19781 +pdfminer/cmap/HKdla-B5-H.pickle.gz,sha256=Y1Rpn-i0M8LOU55lI2DMdYY3hiLRFj5QeC1YpePoiUM,45212 +pdfminer/cmap/HKdla-B5-V.pickle.gz,sha256=gwngsRVvWJEaqW7KDt57wKvbmj4GdIVdxROA61Kyg9g,45167 +pdfminer/cmap/HKdlb-B5-H.pickle.gz,sha256=qVVnznD4Vg8G1DAROc7J8Ej5dd8HBtaVb02FaSG-bZg,44853 +pdfminer/cmap/HKdlb-B5-V.pickle.gz,sha256=sUIj-J3mOMS02jrukHGfWWI54LSc30SpXC1J_DgXflQ,44816 +pdfminer/cmap/HKgccs-B5-H.pickle.gz,sha256=CqyvxTLeH7mGjRVqcR3h1s5XcY_H7hYlr9z2KLprJ58,53104 +pdfminer/cmap/HKgccs-B5-V.pickle.gz,sha256=sTNTH5BdrdgifLAXn7nroGYVH49tFLxAximhjZp_lE8,53050 +pdfminer/cmap/HKm314-B5-H.pickle.gz,sha256=e9V_U8u17R6LJG2ASKiwGf_rm5wgBtb1aE7cLx5KeRA,43667 +pdfminer/cmap/HKm314-B5-V.pickle.gz,sha256=3_d_39_ai9wjeF8LDaBg4rOe59rFkDYNB4E_K-frkXI,43618 
+pdfminer/cmap/HKm471-B5-H.pickle.gz,sha256=IIvA6BBDBHwVLbrfgR--w_Uu89K3-aJbXj5yjrHFHb4,44187 +pdfminer/cmap/HKm471-B5-V.pickle.gz,sha256=GclaubMo-jv8yA-XdrHnaybGuyImlhG2mRimQ6qFnPI,44144 +pdfminer/cmap/HKscs-B5-H.pickle.gz,sha256=LRo0Y7qvPugQkFk3quY1mdIR-NJ1s33YmfvON14y_y4,59508 +pdfminer/cmap/HKscs-B5-V.pickle.gz,sha256=NTUgH6zyi75VPSYrrqL04BGUY0Xu8d0_VJ2z3bFBp2w,59473 +pdfminer/cmap/Hankaku-H.pickle.gz,sha256=L7PxMeav2Y0QPyoxPiHASlFCGkb7M_QweBETpKyhG1M,840 +pdfminer/cmap/Hankaku-V.pickle.gz,sha256=t2vtUGpVk6DG5xXi5lm44bFX3sugLEJrvDkWXmoA0qg,839 +pdfminer/cmap/Hiragana-H.pickle.gz,sha256=rVg6N97lVCDkICDkrDYozAKJdUGTg685PMHwBHANDH8,391 +pdfminer/cmap/Hiragana-V.pickle.gz,sha256=0wcbWMv19gwLKIxtRaMDuyWcVa-Hcj67Z9yvQctk6h8,391 +pdfminer/cmap/KSC-EUC-H.pickle.gz,sha256=uZL4yH6srJ3oRZ4hFI_aKPMl330wfPsO2WBQXIAGCQs,24040 +pdfminer/cmap/KSC-EUC-V.pickle.gz,sha256=-7Pfq-LcZT_TZU3WS3-5Lbuvj-GY71P8_BWzsZf6aBc,24078 +pdfminer/cmap/KSC-H.pickle.gz,sha256=aGH76sm16PHyb3Rf6ROcis0Tl1OtfeTAuw5aID7sXyU,23563 +pdfminer/cmap/KSC-Johab-H.pickle.gz,sha256=uP4gM4TPsHrIkGunTwzYUDlpgLlpmy4f-cO3IYRD_rY,55016 +pdfminer/cmap/KSC-Johab-V.pickle.gz,sha256=vpi9cbVuuQacv9TaUsnRYwdXeUA2HuB_-ni1mTQQ-ZI,55041 +pdfminer/cmap/KSC-V.pickle.gz,sha256=IcFaeDaVjdI0bdoPpHZJ1qJejFiv7YKsqgJQRyOlIr4,23644 +pdfminer/cmap/KSCms-UHC-H.pickle.gz,sha256=bpZdSie9WTH8d7ZGoSnwRZv0-FUBc0bNObbfdBBlVpU,51667 +pdfminer/cmap/KSCms-UHC-HW-H.pickle.gz,sha256=tUhT5c1sMPlvlBlU_BoHiti2KLbdAS_qFcPpns5ALCM,51788 +pdfminer/cmap/KSCms-UHC-HW-V.pickle.gz,sha256=VtxjlSD04SnouezLDt6cPzCn3klUAlYAcZmlVCJsVxc,51821 +pdfminer/cmap/KSCms-UHC-V.pickle.gz,sha256=50W1LLGXNZn_jT45xt8qapJBPMs8mDGuPP25E2eYxA4,51698 +pdfminer/cmap/KSCpc-EUC-H.pickle.gz,sha256=ED3EDVyziu1emQzDz9iUuhWTq5TM4oA_SMbVU2QRF4k,27769 +pdfminer/cmap/KSCpc-EUC-V.pickle.gz,sha256=88mD1sZqsKHed5u6qLFugoC89131TbTEH08zZ7rF7u4,27820 +pdfminer/cmap/Katakana-H.pickle.gz,sha256=AIGpUBtAh5xj1qJ2-QvKTsDWsFzChNh26aQoSbdfbeY,404 +pdfminer/cmap/Katakana-V.pickle.gz,sha256=Guqjgg1bndavYb7SFpDV7dLY5o0_MoEKUV1ywRupcUY,404 +pdfminer/cmap/NWP-H.pickle.gz,sha256=-LBJkMdB427qYetfjRO8mfXIXjp0JQwkygpXF_baSNA,21708 +pdfminer/cmap/NWP-V.pickle.gz,sha256=tsX_-GBBH9i9DoZ_AXyzDf675DODpdQIVt6nJzs7YeQ,21779 +pdfminer/cmap/RKSJ-H.pickle.gz,sha256=BO6-GAp21Mv5uzB2bz9uajS9QF6fPSSJpKIktuqLQYM,23030 +pdfminer/cmap/RKSJ-V.pickle.gz,sha256=6CFi9eF6Uw0wukKG6M_l_XsGwVuyS9NcJP_7DAv-b64,23048 +pdfminer/cmap/Roman-H.pickle.gz,sha256=wUVg9Vb937XOY5aSAtDa35ReHSmcg7Si0PY7wytiGuU,394 +pdfminer/cmap/Roman-V.pickle.gz,sha256=nDhOytlcHYT4IX-21F5pHy3W0yho6jduHNq1tcrdNHM,394 +pdfminer/cmap/UniCNS-UCS2-H.pickle.gz,sha256=m_y6ITSM0vKF2nFt6MuMq6h5LrfA2fni1Y4YjGpPxUA,67459 +pdfminer/cmap/UniCNS-UCS2-V.pickle.gz,sha256=ov9AoPYty5-ghwt2Kkrj350Xpak01ketz2C8h7m2qaw,67395 +pdfminer/cmap/UniCNS-UTF16-H.pickle.gz,sha256=PQRv0JPm4cL6rMVddvfhAATmBNxcDVY6cnmK-YgLF48,87819 +pdfminer/cmap/UniCNS-UTF16-V.pickle.gz,sha256=--nYmhOCI2FdCrOToCsXpQPi4jSo6UWYMJGMwrGuPJU,87751 +pdfminer/cmap/UniCNS-UTF32-H.pickle.gz,sha256=3rcksg29lStr2BX1w0xJ6gkhNpkFGr9nY2FuZ-_DjME,87400 +pdfminer/cmap/UniCNS-UTF32-V.pickle.gz,sha256=dxoHaCs0mm4795QOp28Wf84T-NSOhD4u9WyDw2VjvBM,87327 +pdfminer/cmap/UniCNS-UTF8-H.pickle.gz,sha256=ojwAZrFFAMX9k9apb89NAHV3Zsr3EFMvjCcsYGS-WrM,82631 +pdfminer/cmap/UniCNS-UTF8-V.pickle.gz,sha256=y4qj4PaSFn3SNKqxxZiZfYIlM_OuoOYUdNRmxxwQCtU,82562 +pdfminer/cmap/UniGB-UCS2-H.pickle.gz,sha256=DjOmVHAt1WSOiOEI9pzr-TmV8x2-v8F9kq3B6FlJ3Zk,97445 +pdfminer/cmap/UniGB-UCS2-V.pickle.gz,sha256=posfIJGPaMPxItpO0fax3Cbk5x0MVmXIUB2C5JKK2VY,97441 
+pdfminer/cmap/UniGB-UTF16-H.pickle.gz,sha256=LYlvOMr_tiDDtknEgeIpateGPLU1LPnFsFBHa7lcQI8,101459 +pdfminer/cmap/UniGB-UTF16-V.pickle.gz,sha256=IzoW_D-lucto3IePp_70-3Xs8p3e3SbmxAMMCYrBUK0,101331 +pdfminer/cmap/UniGB-UTF32-H.pickle.gz,sha256=K_1ND9g4zLbgYXM9FLtrzc_GeF-e670aobRLJnUeVik,101490 +pdfminer/cmap/UniGB-UTF32-V.pickle.gz,sha256=_SmckzX8BWIrX0uB670AOkxOYQoUQjtkqXRHxasCriw,101357 +pdfminer/cmap/UniGB-UTF8-H.pickle.gz,sha256=EV5IHeSd1uHyjIg_jm-EyLRNLEG040zpRdNEU34F420,90500 +pdfminer/cmap/UniGB-UTF8-V.pickle.gz,sha256=aklNQPXEaW68RxEzelycdaKcVwmhvOJz50vLluYOr7g,90368 +pdfminer/cmap/UniJIS-UCS2-H.pickle.gz,sha256=aQ9Hu4sQiKkOtyWq7EbQrk5JUU00Yxs5eBtOAgktoQs,35934 +pdfminer/cmap/UniJIS-UCS2-HW-H.pickle.gz,sha256=qZtQ6a-GZPE1WpwR6FSgaIVyxgztp8mKWb1B8YYCNIQ,412 +pdfminer/cmap/UniJIS-UCS2-HW-V.pickle.gz,sha256=Jnz_vrQbdZ1pHc5-k1jRoES-jRPBs6bxip8vE0dRoS0,1402 +pdfminer/cmap/UniJIS-UCS2-V.pickle.gz,sha256=_qOQC-mIUJjI8Oz_1jZLDKdJJdR-irji4LQDsBmDr-g,35852 +pdfminer/cmap/UniJIS-UTF16-H.pickle.gz,sha256=CCiWiS3xFQvRilfiMiJ53PoDtf6rsjY7VWF7pqTZYAQ,58054 +pdfminer/cmap/UniJIS-UTF16-V.pickle.gz,sha256=vvqmwOF9m8MxnjxXsQIIsZ3Z263EtCbrfttwBOW_Wk4,57928 +pdfminer/cmap/UniJIS-UTF32-H.pickle.gz,sha256=fxActrCuZnIaMJKtI_Mksp2cTgXOEUt6KCX6zYB-uc0,57910 +pdfminer/cmap/UniJIS-UTF32-V.pickle.gz,sha256=bqU4hejs6mgb85FVVHStiDUby4THW7Mlnd-D9GgwuEA,57780 +pdfminer/cmap/UniJIS-UTF8-H.pickle.gz,sha256=q41SXodupkIqc-yzDA1UXpNW35asak2RWVT4wLDTgFA,54764 +pdfminer/cmap/UniJIS-UTF8-V.pickle.gz,sha256=Vqodr91noZidwGCz_ynx9nHo0SpQJ1Rc0EZ7pSsJOyo,54684 +pdfminer/cmap/UniJIS2004-UTF16-H.pickle.gz,sha256=WgT9_Y67pkL9xaVc0RxBWodWduKmkWB5gkt86Urd8Yk,58081 +pdfminer/cmap/UniJIS2004-UTF16-V.pickle.gz,sha256=iMUEWyJln24NdmjGwGuaMJLvgeAZplyZDdpa9AtuR24,57960 +pdfminer/cmap/UniJIS2004-UTF32-H.pickle.gz,sha256=gQ4cQYGlJaJOjxV5v2NLjL98XO09lYB_YnMKOs1R094,57940 +pdfminer/cmap/UniJIS2004-UTF32-V.pickle.gz,sha256=LdThzq04QscvgS5cG9tSocP8u7yO7wvkBELXBLqLKa8,57811 +pdfminer/cmap/UniJIS2004-UTF8-H.pickle.gz,sha256=_aqNcmiSLaStpZlTvaYfaM1X2wUU-P_9zaQGHaWvsLw,54829 +pdfminer/cmap/UniJIS2004-UTF8-V.pickle.gz,sha256=IyWqxjSl6zTnsGWGiO65vT7sY3aEfNstlgd2oXmIquk,54749 +pdfminer/cmap/UniJISX0213-UTF32-H.pickle.gz,sha256=XAXARSHjglvpylepsUYnbvF5j94o76d01sa9xmEUkYw,57903 +pdfminer/cmap/UniJISX0213-UTF32-V.pickle.gz,sha256=jF7Oxfk_LXh-jc2HmCsdacvQEmClIzEZpidTEApBfGY,57778 +pdfminer/cmap/UniJISX02132004-UTF32-H.pickle.gz,sha256=qQFdDxir7UtxFsb0Cgs8vWVvozaKeuBBjN2yyo6vjTA,57930 +pdfminer/cmap/UniJISX02132004-UTF32-V.pickle.gz,sha256=8-J0FsF61w5br4yWiSMkFv-al-ACgezaZfoJXF1EQ1E,57808 +pdfminer/cmap/UniKS-UCS2-H.pickle.gz,sha256=JbL2Il9leKbvqIerx8lpLA0QsPfoDP3zsvwKOyQ27vE,60683 +pdfminer/cmap/UniKS-UCS2-V.pickle.gz,sha256=hOFPQIQMjw9WhQPqc--dbbcQ5w-nJhXR_aQJaXxoPL0,60699 +pdfminer/cmap/UniKS-UTF16-H.pickle.gz,sha256=5IwJLqBbZj1EUKFWim0olj4wCWCXSKb6wiXQCqqOM48,61278 +pdfminer/cmap/UniKS-UTF16-V.pickle.gz,sha256=UiwBLZetOZ-eiG1QJpRY8E3YgBQImPMYrQgkoJZ5oCM,61298 +pdfminer/cmap/UniKS-UTF32-H.pickle.gz,sha256=KprUTxsH6ALWJ1Mg9ry5SPdHOBFMBknQ5F4kPEGWQh8,61286 +pdfminer/cmap/UniKS-UTF32-V.pickle.gz,sha256=j2e9w7eYP1cNfN_b-JafCWc-4Heho_8FdXZK3G1KBXU,61309 +pdfminer/cmap/UniKS-UTF8-H.pickle.gz,sha256=mG6K1pP09q0WGAkI9cAZoXzL2Dp3_UMt40rI5J1vWkU,54151 +pdfminer/cmap/UniKS-UTF8-V.pickle.gz,sha256=3biVO5E13UO_f7T3sw1xdphwwaOi2ZFeQtoaIVCAc1c,54172 +pdfminer/cmap/V.pickle.gz,sha256=l6nD-Mh1-4paWVH0afQlqQIjcxTOpIeojilD-zg8xMQ,19826 +pdfminer/cmap/WP-Symbol-H.pickle.gz,sha256=RS0_FNUVeLkKBHWUnwKHUb4JtsD3geDA8yWcl7PPmUY,505 
+pdfminer/cmap/WP-Symbol-V.pickle.gz,sha256=Bg_eJy2a62sooZBnXlpC2yw_H1hpWuAO7s7UFZA9m8Q,505 +pdfminer/cmap/to-unicode-Adobe-CNS1.pickle.gz,sha256=CJ-VwvpWRH5ouwtWGbtJK0kRmP0uxepsrOCXaUFC3mg,138237 +pdfminer/cmap/to-unicode-Adobe-GB1.pickle.gz,sha256=BrP4vh-rT7jl3UjTMDN_1MMSknKbCiRBN5gtJSHk0w0,204425 +pdfminer/cmap/to-unicode-Adobe-Japan1.pickle.gz,sha256=3k49Hb5OIg9kBvMScK9zG3ZQ6NsqVjEBjm79CSOE0FM,112987 +pdfminer/cmap/to-unicode-Adobe-Korea1.pickle.gz,sha256=HXKMj2os-jZE_jZY0oR3gQAe_vR8j-t16tT_PwIfMJ4,120859 +pdfminer/cmapdb.py,sha256=w0WIqxztuCxNV4ZF3BatFKAOEJaQDv_N5CHS5DsgQZI,16083 +pdfminer/converter.py,sha256=BUW2so24XDfJ64vwDEBqPezV5H1K9i_oX_vzz75nfZw,35992 +pdfminer/data_structures.py,sha256=qPFDgBMB0T7HARKWpOqemKpl_xuGdmyp2qREiIKYvTA,1654 +pdfminer/encodingdb.py,sha256=4VGsyLmLIzgJOoIzBntvDNAp_et6m22rjDT_b5AzRAg,4031 +pdfminer/fontmetrics.py,sha256=te2P8f2882Kz5D6M0TWZA6QTALlf2eCl_chwk-dGl_Q,112596 +pdfminer/glyphlist.py,sha256=imgETwSUEAHyPY7vxZaEp0rNFmcbnKmqQZasxcqASVg,130799 +pdfminer/high_level.py,sha256=FECcfHfBxpmPyEThiovEXKyde8PtXrAA0W4oIGl8xQw,7320 +pdfminer/image.py,sha256=HJ_4iKq8mUClHRNt6OKno_6h2C34U-MX1ftXSIhD4qc,9212 +pdfminer/jbig2.py,sha256=1lZTQYsO8XB5hurOIGzx1KOJIVSa86p08bvmEFJObb8,11441 +pdfminer/latin_enc.py,sha256=ZqLnRnUxtzoihEv2RN1RK3pmScG-QWztGYMyTOknoGg,8531 +pdfminer/layout.py,sha256=F8AhIpcw-GAu42ATfnTU_Nzh4RAOTnz6I6CDuBuPUa4,34788 +pdfminer/lzw.py,sha256=D6EWWBq62owERAa0eCUmdeoziSDb7LR7f1K-MRlZGk0,3237 +pdfminer/pdfcolor.py,sha256=DT306UTBjWK1nrzxtU5OANGqtn8nOlGhLhNtlpCH5VE,821 +pdfminer/pdfdevice.py,sha256=tz39F-tOO7zY-WFQodhGHl-McUkte0c7gzLwi4ZhIsI,8767 +pdfminer/pdfdocument.py,sha256=MTpXdP0pfAeuhY5_A_wnjr2-lfMTn8ukUptdiAZD-CE,37244 +pdfminer/pdfexceptions.py,sha256=Pu1NRCz0XHgI8FOf4B9xusrj7xPQkeMOGkOUSAovyHw,490 +pdfminer/pdffont.py,sha256=gwv0vj9SdROVd6z5awz3s809u7CJKCFTXvzRpkgWiu0,38001 +pdfminer/pdfinterp.py,sha256=QirF6ZmnodXkMfvcOZVwWfJ8t3IFOXZuU_gPaE8zJGM,34540 +pdfminer/pdfpage.py,sha256=uCsNTpLRiX8_io-DdSlHane287SYv8OelbJXDLQpYvE,6937 +pdfminer/pdfparser.py,sha256=Gj69QHmi_TDpRa5qNhVDJ61cVPcsATlsk1Bggw7FroY,5884 +pdfminer/pdftypes.py,sha256=RlD4Esle5rleKXaqatIfso_5avLK-wRFHRBoki7-nGA,11771 +pdfminer/psexceptions.py,sha256=GBr91CbVxsjd1kMaOGf7TP6nh1nVmsjl0CNS7dwzJtw,208 +pdfminer/psparser.py,sha256=FlokRjlImVg6BF0MRE6RNBIKgjvBh3F-ajuhSrJ2v_4,19945 +pdfminer/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pdfminer/runlength.py,sha256=ckhlxq8W7EFI91QqdFwBo2cHyAcYRJpm7pl7gFeOMHg,1358 +pdfminer/settings.py,sha256=OoU2rI8aVOcccVrJAxOOTCXZZIscjieuC9VssYgXXmY,15 +pdfminer/utils.py,sha256=XR5EyW8483Emwkk5__UCgz_9Y-4RDwEXb7Btx16WjxY,20823 diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/REQUESTED b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/REQUESTED new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/WHEEL b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/top_level.txt b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/top_level.txt new file mode 100644 index 00000000..0cce8635 --- 
/dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer.six-20240706.dist-info/top_level.txt @@ -0,0 +1 @@ +pdfminer diff --git a/templates/skills/file_manager/dependencies/pdfminer/__init__.py b/templates/skills/file_manager/dependencies/pdfminer/__init__.py new file mode 100644 index 00000000..5bd4d50a --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/__init__.py @@ -0,0 +1,10 @@ +from importlib.metadata import version, PackageNotFoundError + +try: + __version__ = version("pdfminer.six") +except PackageNotFoundError: + # package is not installed, return default + __version__ = "0.0" + +if __name__ == "__main__": + print(__version__) diff --git a/templates/skills/file_manager/dependencies/pdfminer/_saslprep.py b/templates/skills/file_manager/dependencies/pdfminer/_saslprep.py new file mode 100644 index 00000000..d56ca16b --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/_saslprep.py @@ -0,0 +1,97 @@ +# Copyright 2016-present MongoDB, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# Some changes copyright 2021-present Matthias Valvekens, +# licensed under the license of the pyHanko project (see LICENSE file). + + +"""An implementation of RFC4013 SASLprep.""" + +__all__ = ["saslprep"] + +import stringprep +from typing import Callable, Tuple +import unicodedata + +from .pdfexceptions import PDFValueError + +# RFC4013 section 2.3 prohibited output. +_PROHIBITED: Tuple[Callable[[str], bool], ...] = ( + # A strict reading of RFC 4013 requires table c12 here, but + # characters from it are mapped to SPACE in the Map step. Can + # normalization reintroduce them somehow? + stringprep.in_table_c12, + stringprep.in_table_c21_c22, + stringprep.in_table_c3, + stringprep.in_table_c4, + stringprep.in_table_c5, + stringprep.in_table_c6, + stringprep.in_table_c7, + stringprep.in_table_c8, + stringprep.in_table_c9, +) + + +def saslprep(data: str, prohibit_unassigned_code_points: bool = True) -> str: + """An implementation of RFC4013 SASLprep. + :param data: + The string to SASLprep. + :param prohibit_unassigned_code_points: + RFC 3454 and RFCs for various SASL mechanisms distinguish between + `queries` (unassigned code points allowed) and + `stored strings` (unassigned code points prohibited). Defaults + to ``True`` (unassigned code points are prohibited). + :return: The SASLprep'ed version of `data`. + """ + if prohibit_unassigned_code_points: + prohibited = _PROHIBITED + (stringprep.in_table_a1,) + else: + prohibited = _PROHIBITED + + # RFC3454 section 2, step 1 - Map + # RFC4013 section 2.1 mappings + # Map Non-ASCII space characters to SPACE (U+0020). Map + # commonly mapped to nothing characters to, well, nothing. 
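+    # Concretely: table C.1.2 is the RFC 3454 list of non-ASCII space
+    # characters (each becomes U+0020), and table B.1 is the list of
+    # "commonly mapped to nothing" characters (e.g. SOFT HYPHEN), which
+    # are dropped from the output entirely.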
+ in_table_c12 = stringprep.in_table_c12 + in_table_b1 = stringprep.in_table_b1 + data = "".join( + ["\u0020" if in_table_c12(elt) else elt for elt in data if not in_table_b1(elt)] + ) + + # RFC3454 section 2, step 2 - Normalize + # RFC4013 section 2.2 normalization + data = unicodedata.ucd_3_2_0.normalize("NFKC", data) + + in_table_d1 = stringprep.in_table_d1 + if in_table_d1(data[0]): + if not in_table_d1(data[-1]): + # RFC3454, Section 6, #3. If a string contains any + # RandALCat character, the first and last characters + # MUST be RandALCat characters. + raise PDFValueError("SASLprep: failed bidirectional check") + # RFC3454, Section 6, #2. If a string contains any RandALCat + # character, it MUST NOT contain any LCat character. + prohibited = prohibited + (stringprep.in_table_d2,) + else: + # RFC3454, Section 6, #3. Following the logic of #3, if + # the first character is not a RandALCat, no other character + # can be either. + prohibited = prohibited + (in_table_d1,) + + # RFC3454 section 2, step 3 and 4 - Prohibit and check bidi + for char in data: + if any(in_table(char) for in_table in prohibited): + raise PDFValueError("SASLprep: failed prohibited character check") + + return data diff --git a/templates/skills/file_manager/dependencies/pdfminer/arcfour.py b/templates/skills/file_manager/dependencies/pdfminer/arcfour.py new file mode 100644 index 00000000..a767667f --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/arcfour.py @@ -0,0 +1,36 @@ +""" Python implementation of Arcfour encryption algorithm. +See https://en.wikipedia.org/wiki/RC4 +This code is in the public domain. + +""" + + +from typing import Sequence + + +class Arcfour: + def __init__(self, key: Sequence[int]) -> None: + # because Py3 range is not indexable + s = [i for i in range(256)] + j = 0 + klen = len(key) + for i in range(256): + j = (j + s[i] + key[i % klen]) % 256 + (s[i], s[j]) = (s[j], s[i]) + self.s = s + (self.i, self.j) = (0, 0) + + def process(self, data: bytes) -> bytes: + (i, j) = (self.i, self.j) + s = self.s + r = b"" + for c in iter(data): + i = (i + 1) % 256 + j = (j + s[i]) % 256 + (s[i], s[j]) = (s[j], s[i]) + k = s[(s[i] + s[j]) % 256] + r += bytes((c ^ k,)) + (self.i, self.j) = (i, j) + return r + + encrypt = decrypt = process diff --git a/templates/skills/file_manager/dependencies/pdfminer/ascii85.py b/templates/skills/file_manager/dependencies/pdfminer/ascii85.py new file mode 100644 index 00000000..dbe3d2a2 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/ascii85.py @@ -0,0 +1,72 @@ +""" Python implementation of ASCII85/ASCIIHex decoder (Adobe version). + +This code is in the public domain. + +""" + +import re +import struct + + +# ascii85decode(data) +def ascii85decode(data: bytes) -> bytes: + """ + In ASCII85 encoding, every four bytes are encoded with five ASCII + letters, using 85 different types of characters (as 256**4 < 85**5). + When the length of the original bytes is not a multiple of 4, a special + rule is used for round up. + + The Adobe's ASCII85 implementation is slightly different from + its original in handling the last characters. + + """ + n = b = 0 + out = b"" + for i in iter(data): + c = bytes((i,)) + if b"!" 
<= c and c <= b"u": + n += 1 + b = b * 85 + (ord(c) - 33) + if n == 5: + out += struct.pack(">L", b) + n = b = 0 + elif c == b"z": + assert n == 0, str(n) + out += b"\0\0\0\0" + elif c == b"~": + if n: + for _ in range(5 - n): + b = b * 85 + 84 + out += struct.pack(">L", b)[: n - 1] + break + return out + + +# asciihexdecode(data) +hex_re = re.compile(rb"([a-f\d]{2})", re.IGNORECASE) +trail_re = re.compile(rb"^(?:[a-f\d]{2}|\s)*([a-f\d])[\s>]*$", re.IGNORECASE) + + +def asciihexdecode(data: bytes) -> bytes: + """ + ASCIIHexDecode filter: PDFReference v1.4 section 3.3.1 + For each pair of ASCII hexadecimal digits (0-9 and A-F or a-f), the + ASCIIHexDecode filter produces one byte of binary data. All white-space + characters are ignored. A right angle bracket character (>) indicates + EOD. Any other characters will cause an error. If the filter encounters + the EOD marker after reading an odd number of hexadecimal digits, it + will behave as if a 0 followed the last digit. + """ + + def decode(x: bytes) -> bytes: + i = int(x, 16) + return bytes((i,)) + + out = b"" + for x in hex_re.findall(data): + out += decode(x) + + m = trail_re.search(data) + if m: + out += decode(m.group(1) + b"0") + return out diff --git a/templates/skills/file_manager/dependencies/pdfminer/ccitt.py b/templates/skills/file_manager/dependencies/pdfminer/ccitt.py new file mode 100644 index 00000000..b51ef2bb --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/ccitt.py @@ -0,0 +1,634 @@ +# CCITT Fax decoder +# +# Bugs: uncompressed mode untested. +# +# cf. +# ITU-T Recommendation T.4 +# "Standardization of Group 3 facsimile terminals +# for document transmission" +# ITU-T Recommendation T.6 +# "FACSIMILE CODING SCHEMES AND CODING CONTROL FUNCTIONS +# FOR GROUP 4 FACSIMILE APPARATUS" + + +import array +from typing import ( + Any, + Callable, + Dict, + Iterator, + List, + MutableSequence, + Optional, + Sequence, + Union, + cast, +) + +from .pdfexceptions import PDFException, PDFValueError + + +def get_bytes(data: bytes) -> Iterator[int]: + yield from data + + +# Workaround https://github.com/python/mypy/issues/731 +BitParserState = MutableSequence[Any] +# A better definition (not supported by mypy) would be: +# BitParserState = MutableSequence[Union["BitParserState", int, str, None]] + + +class BitParser: + _state: BitParserState + + # _accept is declared Optional solely as a workaround for + # https://github.com/python/mypy/issues/708 + _accept: Optional[Callable[[Any], BitParserState]] + + def __init__(self) -> None: + self._pos = 0 + + @classmethod + def add(cls, root: BitParserState, v: Union[int, str], bits: str) -> None: + p: BitParserState = root + b = None + for i in range(len(bits)): + if 0 < i: + assert b is not None + if p[b] is None: + p[b] = [None, None] + p = p[b] + if bits[i] == "1": + b = 1 + else: + b = 0 + assert b is not None + p[b] = v + + def feedbytes(self, data: bytes) -> None: + for byte in get_bytes(data): + for m in (128, 64, 32, 16, 8, 4, 2, 1): + self._parse_bit(byte & m) + + def _parse_bit(self, x: object) -> None: + if x: + v = self._state[1] + else: + v = self._state[0] + self._pos += 1 + if isinstance(v, list): + self._state = v + else: + assert self._accept is not None + self._state = self._accept(v) + + +class CCITTG4Parser(BitParser): + + MODE = [None, None] + BitParser.add(MODE, 0, "1") + BitParser.add(MODE, +1, "011") + BitParser.add(MODE, -1, "010") + BitParser.add(MODE, "h", "001") + BitParser.add(MODE, "p", "0001") + BitParser.add(MODE, +2, "000011") + 
+    BitParser.add(MODE, -2, "000010")
+    BitParser.add(MODE, +3, "0000011")
+    BitParser.add(MODE, -3, "0000010")
+    BitParser.add(MODE, "u", "0000001111")
+    BitParser.add(MODE, "x1", "0000001000")
+    BitParser.add(MODE, "x2", "0000001001")
+    BitParser.add(MODE, "x3", "0000001010")
+    BitParser.add(MODE, "x4", "0000001011")
+    BitParser.add(MODE, "x5", "0000001100")
+    BitParser.add(MODE, "x6", "0000001101")
+    BitParser.add(MODE, "x7", "0000001110")
+    BitParser.add(MODE, "e", "000000000001000000000001")
+
+    WHITE = [None, None]
+    BitParser.add(WHITE, 0, "00110101")
+    BitParser.add(WHITE, 1, "000111")
+    BitParser.add(WHITE, 2, "0111")
+    BitParser.add(WHITE, 3, "1000")
+    BitParser.add(WHITE, 4, "1011")
+    BitParser.add(WHITE, 5, "1100")
+    BitParser.add(WHITE, 6, "1110")
+    BitParser.add(WHITE, 7, "1111")
+    BitParser.add(WHITE, 8, "10011")
+    BitParser.add(WHITE, 9, "10100")
+    BitParser.add(WHITE, 10, "00111")
+    BitParser.add(WHITE, 11, "01000")
+    BitParser.add(WHITE, 12, "001000")
+    BitParser.add(WHITE, 13, "000011")
+    BitParser.add(WHITE, 14, "110100")
+    BitParser.add(WHITE, 15, "110101")
+    BitParser.add(WHITE, 16, "101010")
+    BitParser.add(WHITE, 17, "101011")
+    BitParser.add(WHITE, 18, "0100111")
+    BitParser.add(WHITE, 19, "0001100")
+    BitParser.add(WHITE, 20, "0001000")
+    BitParser.add(WHITE, 21, "0010111")
+    BitParser.add(WHITE, 22, "0000011")
+    BitParser.add(WHITE, 23, "0000100")
+    BitParser.add(WHITE, 24, "0101000")
+    BitParser.add(WHITE, 25, "0101011")
+    BitParser.add(WHITE, 26, "0010011")
+    BitParser.add(WHITE, 27, "0100100")
+    BitParser.add(WHITE, 28, "0011000")
+    BitParser.add(WHITE, 29, "00000010")
+    BitParser.add(WHITE, 30, "00000011")
+    BitParser.add(WHITE, 31, "00011010")
+    BitParser.add(WHITE, 32, "00011011")
+    BitParser.add(WHITE, 33, "00010010")
+    BitParser.add(WHITE, 34, "00010011")
+    BitParser.add(WHITE, 35, "00010100")
+    BitParser.add(WHITE, 36, "00010101")
+    BitParser.add(WHITE, 37, "00010110")
+    BitParser.add(WHITE, 38, "00010111")
+    BitParser.add(WHITE, 39, "00101000")
+    BitParser.add(WHITE, 40, "00101001")
+    BitParser.add(WHITE, 41, "00101010")
+    BitParser.add(WHITE, 42, "00101011")
+    BitParser.add(WHITE, 43, "00101100")
+    BitParser.add(WHITE, 44, "00101101")
+    BitParser.add(WHITE, 45, "00000100")
+    BitParser.add(WHITE, 46, "00000101")
+    BitParser.add(WHITE, 47, "00001010")
+    BitParser.add(WHITE, 48, "00001011")
+    BitParser.add(WHITE, 49, "01010010")
+    BitParser.add(WHITE, 50, "01010011")
+    BitParser.add(WHITE, 51, "01010100")
+    BitParser.add(WHITE, 52, "01010101")
+    BitParser.add(WHITE, 53, "00100100")
+    BitParser.add(WHITE, 54, "00100101")
+    BitParser.add(WHITE, 55, "01011000")
+    BitParser.add(WHITE, 56, "01011001")
+    BitParser.add(WHITE, 57, "01011010")
+    BitParser.add(WHITE, 58, "01011011")
+    BitParser.add(WHITE, 59, "01001010")
+    BitParser.add(WHITE, 60, "01001011")
+    BitParser.add(WHITE, 61, "00110010")
+    BitParser.add(WHITE, 62, "00110011")
+    BitParser.add(WHITE, 63, "00110100")
+    BitParser.add(WHITE, 64, "11011")
+    BitParser.add(WHITE, 128, "10010")
+    BitParser.add(WHITE, 192, "010111")
+    BitParser.add(WHITE, 256, "0110111")
+    BitParser.add(WHITE, 320, "00110110")
+    BitParser.add(WHITE, 384, "00110111")
+    BitParser.add(WHITE, 448, "01100100")
+    BitParser.add(WHITE, 512, "01100101")
+    BitParser.add(WHITE, 576, "01101000")
+    BitParser.add(WHITE, 640, "01100111")
+    BitParser.add(WHITE, 704, "011001100")
+    BitParser.add(WHITE, 768, "011001101")
+    BitParser.add(WHITE, 832, "011010010")
+    BitParser.add(WHITE, 896, "011010011")
+    BitParser.add(WHITE, 960, "011010100")
+    BitParser.add(WHITE, 1024, "011010101")
+    BitParser.add(WHITE, 1088, "011010110")
+    BitParser.add(WHITE, 1152, "011010111")
+    BitParser.add(WHITE, 1216, "011011000")
+    BitParser.add(WHITE, 1280, "011011001")
+    BitParser.add(WHITE, 1344, "011011010")
+    BitParser.add(WHITE, 1408, "011011011")
+    BitParser.add(WHITE, 1472, "010011000")
+    BitParser.add(WHITE, 1536, "010011001")
+    BitParser.add(WHITE, 1600, "010011010")
+    BitParser.add(WHITE, 1664, "011000")
+    BitParser.add(WHITE, 1728, "010011011")
+    BitParser.add(WHITE, 1792, "00000001000")
+    BitParser.add(WHITE, 1856, "00000001100")
+    BitParser.add(WHITE, 1920, "00000001101")
+    BitParser.add(WHITE, 1984, "000000010010")
+    BitParser.add(WHITE, 2048, "000000010011")
+    BitParser.add(WHITE, 2112, "000000010100")
+    BitParser.add(WHITE, 2176, "000000010101")
+    BitParser.add(WHITE, 2240, "000000010110")
+    BitParser.add(WHITE, 2304, "000000010111")
+    BitParser.add(WHITE, 2368, "000000011100")
+    BitParser.add(WHITE, 2432, "000000011101")
+    BitParser.add(WHITE, 2496, "000000011110")
+    BitParser.add(WHITE, 2560, "000000011111")
+
+    BLACK = [None, None]
+    BitParser.add(BLACK, 0, "0000110111")
+    BitParser.add(BLACK, 1, "010")
+    BitParser.add(BLACK, 2, "11")
+    BitParser.add(BLACK, 3, "10")
+    BitParser.add(BLACK, 4, "011")
+    BitParser.add(BLACK, 5, "0011")
+    BitParser.add(BLACK, 6, "0010")
+    BitParser.add(BLACK, 7, "00011")
+    BitParser.add(BLACK, 8, "000101")
+    BitParser.add(BLACK, 9, "000100")
+    BitParser.add(BLACK, 10, "0000100")
+    BitParser.add(BLACK, 11, "0000101")
+    BitParser.add(BLACK, 12, "0000111")
+    BitParser.add(BLACK, 13, "00000100")
+    BitParser.add(BLACK, 14, "00000111")
+    BitParser.add(BLACK, 15, "000011000")
+    BitParser.add(BLACK, 16, "0000010111")
+    BitParser.add(BLACK, 17, "0000011000")
+    BitParser.add(BLACK, 18, "0000001000")
+    BitParser.add(BLACK, 19, "00001100111")
+    BitParser.add(BLACK, 20, "00001101000")
+    BitParser.add(BLACK, 21, "00001101100")
+    BitParser.add(BLACK, 22, "00000110111")
+    BitParser.add(BLACK, 23, "00000101000")
+    BitParser.add(BLACK, 24, "00000010111")
+    BitParser.add(BLACK, 25, "00000011000")
+    BitParser.add(BLACK, 26, "000011001010")
+    BitParser.add(BLACK, 27, "000011001011")
+    BitParser.add(BLACK, 28, "000011001100")
+    BitParser.add(BLACK, 29, "000011001101")
+    BitParser.add(BLACK, 30, "000001101000")
+    BitParser.add(BLACK, 31, "000001101001")
+    BitParser.add(BLACK, 32, "000001101010")
+    BitParser.add(BLACK, 33, "000001101011")
+    BitParser.add(BLACK, 34, "000011010010")
+    BitParser.add(BLACK, 35, "000011010011")
+    BitParser.add(BLACK, 36, "000011010100")
+    BitParser.add(BLACK, 37, "000011010101")
+    BitParser.add(BLACK, 38, "000011010110")
+    BitParser.add(BLACK, 39, "000011010111")
+    BitParser.add(BLACK, 40, "000001101100")
+    BitParser.add(BLACK, 41, "000001101101")
+    BitParser.add(BLACK, 42, "000011011010")
+    BitParser.add(BLACK, 43, "000011011011")
+    BitParser.add(BLACK, 44, "000001010100")
+    BitParser.add(BLACK, 45, "000001010101")
+    BitParser.add(BLACK, 46, "000001010110")
+    BitParser.add(BLACK, 47, "000001010111")
+    BitParser.add(BLACK, 48, "000001100100")
+    BitParser.add(BLACK, 49, "000001100101")
+    BitParser.add(BLACK, 50, "000001010010")
+    BitParser.add(BLACK, 51, "000001010011")
+    BitParser.add(BLACK, 52, "000000100100")
+    BitParser.add(BLACK, 53, "000000110111")
+    BitParser.add(BLACK, 54, "000000111000")
+    BitParser.add(BLACK, 55, "000000100111")
+    BitParser.add(BLACK, 56, "000000101000")
+    BitParser.add(BLACK, 57, "000001011000")
+    BitParser.add(BLACK, 58, "000001011001")
+    BitParser.add(BLACK, 59, "000000101011")
+    BitParser.add(BLACK, 60, "000000101100")
+    BitParser.add(BLACK, 61, "000001011010")
+    BitParser.add(BLACK, 62, "000001100110")
+    BitParser.add(BLACK, 63, "000001100111")
+    BitParser.add(BLACK, 64, "0000001111")
+    BitParser.add(BLACK, 128, "000011001000")
+    BitParser.add(BLACK, 192, "000011001001")
+    BitParser.add(BLACK, 256, "000001011011")
+    BitParser.add(BLACK, 320, "000000110011")
+    BitParser.add(BLACK, 384, "000000110100")
+    BitParser.add(BLACK, 448, "000000110101")
+    BitParser.add(BLACK, 512, "0000001101100")
+    BitParser.add(BLACK, 576, "0000001101101")
+    BitParser.add(BLACK, 640, "0000001001010")
+    BitParser.add(BLACK, 704, "0000001001011")
+    BitParser.add(BLACK, 768, "0000001001100")
+    BitParser.add(BLACK, 832, "0000001001101")
+    BitParser.add(BLACK, 896, "0000001110010")
+    BitParser.add(BLACK, 960, "0000001110011")
+    BitParser.add(BLACK, 1024, "0000001110100")
+    BitParser.add(BLACK, 1088, "0000001110101")
+    BitParser.add(BLACK, 1152, "0000001110110")
+    BitParser.add(BLACK, 1216, "0000001110111")
+    BitParser.add(BLACK, 1280, "0000001010010")
+    BitParser.add(BLACK, 1344, "0000001010011")
+    BitParser.add(BLACK, 1408, "0000001010100")
+    BitParser.add(BLACK, 1472, "0000001010101")
+    BitParser.add(BLACK, 1536, "0000001011010")
+    BitParser.add(BLACK, 1600, "0000001011011")
+    BitParser.add(BLACK, 1664, "0000001100100")
+    BitParser.add(BLACK, 1728, "0000001100101")
+    BitParser.add(BLACK, 1792, "00000001000")
+    BitParser.add(BLACK, 1856, "00000001100")
+    BitParser.add(BLACK, 1920, "00000001101")
+    BitParser.add(BLACK, 1984, "000000010010")
+    BitParser.add(BLACK, 2048, "000000010011")
+    BitParser.add(BLACK, 2112, "000000010100")
+    BitParser.add(BLACK, 2176, "000000010101")
+    BitParser.add(BLACK, 2240, "000000010110")
+    BitParser.add(BLACK, 2304, "000000010111")
+    BitParser.add(BLACK, 2368, "000000011100")
+    BitParser.add(BLACK, 2432, "000000011101")
+    BitParser.add(BLACK, 2496, "000000011110")
+    BitParser.add(BLACK, 2560, "000000011111")
+
+    UNCOMPRESSED = [None, None]
+    BitParser.add(UNCOMPRESSED, "1", "1")
+    BitParser.add(UNCOMPRESSED, "01", "01")
+    BitParser.add(UNCOMPRESSED, "001", "001")
+    BitParser.add(UNCOMPRESSED, "0001", "0001")
+    BitParser.add(UNCOMPRESSED, "00001", "00001")
+    BitParser.add(UNCOMPRESSED, "00000", "000001")
+    BitParser.add(UNCOMPRESSED, "T00", "00000011")
+    BitParser.add(UNCOMPRESSED, "T10", "00000010")
+    BitParser.add(UNCOMPRESSED, "T000", "000000011")
+    BitParser.add(UNCOMPRESSED, "T100", "000000010")
+    BitParser.add(UNCOMPRESSED, "T0000", "0000000011")
+    BitParser.add(UNCOMPRESSED, "T1000", "0000000010")
+    BitParser.add(UNCOMPRESSED, "T00000", "00000000011")
+    BitParser.add(UNCOMPRESSED, "T10000", "00000000010")
+
+    class CCITTException(PDFException):
+        pass
+
+    class EOFB(CCITTException):
+        pass
+
+    class InvalidData(CCITTException):
+        pass
+
+    class ByteSkip(CCITTException):
+        pass
+
+    _color: int
+
+    def __init__(self, width: int, bytealign: bool = False) -> None:
+        BitParser.__init__(self)
+        self.width = width
+        self.bytealign = bytealign
+        self.reset()
+        return
+
+    def feedbytes(self, data: bytes) -> None:
+        for byte in get_bytes(data):
+            try:
+                for m in (128, 64, 32, 16, 8, 4, 2, 1):
+                    self._parse_bit(byte & m)
+            except self.ByteSkip:
+                self._accept = self._parse_mode
+                self._state = self.MODE
+            except self.EOFB:
+                break
+        return
+
+    def _parse_mode(self, mode: object) -> BitParserState:
+        if mode == "p":
+            self._do_pass()
+            self._flush_line()
+            return self.MODE
+        elif mode == "h":
+            self._n1 = 0
+            self._accept = self._parse_horiz1
+            if self._color:
+                return self.WHITE
+            else:
+                return self.BLACK
+        elif mode == "u":
+            self._accept = self._parse_uncompressed
+            return self.UNCOMPRESSED
+        elif mode == "e":
+            raise self.EOFB
+        elif isinstance(mode, int):
+            self._do_vertical(mode)
+            self._flush_line()
+            return self.MODE
+        else:
+            raise self.InvalidData(mode)
+
+    def _parse_horiz1(self, n: Any) -> BitParserState:
+        if n is None:
+            raise self.InvalidData
+        self._n1 += n
+        if n < 64:
+            self._n2 = 0
+            self._color = 1 - self._color
+            self._accept = self._parse_horiz2
+        if self._color:
+            return self.WHITE
+        else:
+            return self.BLACK
+
+    def _parse_horiz2(self, n: Any) -> BitParserState:
+        if n is None:
+            raise self.InvalidData
+        self._n2 += n
+        if n < 64:
+            self._color = 1 - self._color
+            self._accept = self._parse_mode
+            self._do_horizontal(self._n1, self._n2)
+            self._flush_line()
+            return self.MODE
+        elif self._color:
+            return self.WHITE
+        else:
+            return self.BLACK
+
+    def _parse_uncompressed(self, bits: Optional[str]) -> BitParserState:
+        if not bits:
+            raise self.InvalidData
+        if bits.startswith("T"):
+            self._accept = self._parse_mode
+            self._color = int(bits[1])
+            self._do_uncompressed(bits[2:])
+            return self.MODE
+        else:
+            self._do_uncompressed(bits)
+            return self.UNCOMPRESSED
+
+    def _get_bits(self) -> str:
+        return "".join(str(b) for b in self._curline[: self._curpos])
+
+    def _get_refline(self, i: int) -> str:
+        if i < 0:
+            return "[]" + "".join(str(b) for b in self._refline)
+        elif len(self._refline) <= i:
+            return "".join(str(b) for b in self._refline) + "[]"
+        else:
+            return (
+                "".join(str(b) for b in self._refline[:i])
+                + "["
+                + str(self._refline[i])
+                + "]"
+                + "".join(str(b) for b in self._refline[i + 1 :])
+            )
+
+    def reset(self) -> None:
+        self._y = 0
+        self._curline = array.array("b", [1] * self.width)
+        self._reset_line()
+        self._accept = self._parse_mode
+        self._state = self.MODE
+        return
+
+    def output_line(self, y: int, bits: Sequence[int]) -> None:
+        print(y, "".join(str(b) for b in bits))
+        return
+
+    def _reset_line(self) -> None:
+        self._refline = self._curline
+        self._curline = array.array("b", [1] * self.width)
+        self._curpos = -1
+        self._color = 1
+        return
+
+    def _flush_line(self) -> None:
+        if self.width <= self._curpos:
+            self.output_line(self._y, self._curline)
+            self._y += 1
+            self._reset_line()
+            if self.bytealign:
+                raise self.ByteSkip
+        return
+
+    def _do_vertical(self, dx: int) -> None:
+        x1 = self._curpos + 1
+        while 1:
+            if x1 == 0:
+                if self._color == 1 and self._refline[x1] != self._color:
+                    break
+            elif x1 == len(self._refline):
+                break
+            elif (
+                self._refline[x1 - 1] == self._color
+                and self._refline[x1] != self._color
+            ):
+                break
+            x1 += 1
+        x1 += dx
+        x0 = max(0, self._curpos)
+        x1 = max(0, min(self.width, x1))
+        if x1 < x0:
+            for x in range(x1, x0):
+                self._curline[x] = self._color
+        elif x0 < x1:
+            for x in range(x0, x1):
+                self._curline[x] = self._color
+        self._curpos = x1
+        self._color = 1 - self._color
+        return
+
+    def _do_pass(self) -> None:
+        x1 = self._curpos + 1
+        while 1:
+            if x1 == 0:
+                if self._color == 1 and self._refline[x1] != self._color:
+                    break
+            elif x1 == len(self._refline):
+                break
+            elif (
+                self._refline[x1 - 1] == self._color
+                and self._refline[x1] != self._color
+            ):
+                break
+            x1 += 1
+        while 1:
+            if x1 == 0:
+                if self._color == 0 and self._refline[x1] == self._color:
+                    break
+            elif x1 == len(self._refline):
+                break
+            elif (
+                self._refline[x1 - 1] != self._color
+                and self._refline[x1] == self._color
+            ):
+                break
+            x1 += 1
+        for x in range(self._curpos, x1):
+            self._curline[x] = self._color
+        self._curpos = x1
+        return
+
+    def _do_horizontal(self, n1: int, n2: int) -> None:
+        if self._curpos < 0:
+            self._curpos = 0
+        x = self._curpos
+        for _ in range(n1):
+            if len(self._curline) <= x:
+                break
+            self._curline[x] = self._color
+            x += 1
+        for _ in range(n2):
+            if len(self._curline) <= x:
+                break
+            self._curline[x] = 1 - self._color
+            x += 1
+        self._curpos = x
+        return
+
+    def _do_uncompressed(self, bits: str) -> None:
+        for c in bits:
+            self._curline[self._curpos] = int(c)
+            self._curpos += 1
+            self._flush_line()
+        return
+
+
+class CCITTFaxDecoder(CCITTG4Parser):
+    def __init__(
+        self, width: int, bytealign: bool = False, reversed: bool = False
+    ) -> None:
+        CCITTG4Parser.__init__(self, width, bytealign=bytealign)
+        self.reversed = reversed
+        self._buf = b""
+        return
+
+    def close(self) -> bytes:
+        return self._buf
+
+    def output_line(self, y: int, bits: Sequence[int]) -> None:
+        arr = array.array("B", [0] * ((len(bits) + 7) // 8))
+        if self.reversed:
+            bits = [1 - b for b in bits]
+        for (i, b) in enumerate(bits):
+            if b:
+                arr[i // 8] += (128, 64, 32, 16, 8, 4, 2, 1)[i % 8]
+        self._buf += arr.tobytes()
+        return
+
+
+def ccittfaxdecode(data: bytes, params: Dict[str, object]) -> bytes:
+    K = params.get("K")
+    if K == -1:
+        cols = cast(int, params.get("Columns"))
+        bytealign = cast(bool, params.get("EncodedByteAlign"))
+        reversed = cast(bool, params.get("BlackIs1"))
+        parser = CCITTFaxDecoder(cols, bytealign=bytealign, reversed=reversed)
+    else:
+        raise PDFValueError(K)
+    parser.feedbytes(data)
+    return parser.close()
+
+
+# test
+def main(argv: List[str]) -> None:
+    if not argv[1:]:
+        import unittest
+
+        unittest.main()
+        return
+
+    class Parser(CCITTG4Parser):
+        def __init__(self, width: int, bytealign: bool = False) -> None:
+            import pygame  # type: ignore[import]
+
+            CCITTG4Parser.__init__(self, width, bytealign=bytealign)
+            self.img = pygame.Surface((self.width, 1000))
+            return
+
+        def output_line(self, y: int, bits: Sequence[int]) -> None:
+            for (x, b) in enumerate(bits):
+                if b:
+                    self.img.set_at((x, y), (255, 255, 255))
+                else:
+                    self.img.set_at((x, y), (0, 0, 0))
+            return
+
+        def close(self) -> None:
+            import pygame
+
+            pygame.image.save(self.img, "out.bmp")
+            return
+
+    for path in argv[1:]:
+        fp = open(path, "rb")
+        (_, _, k, w, h, _) = path.split(".")
+        parser = Parser(int(w))
+        parser.feedbytes(fp.read())
+        parser.close()
+        fp.close()
+    return
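For reviewers skimming this vendored module: the only public entry point is `ccittfaxdecode`, and it accepts only `K == -1` (pure Group 4 two-dimensional coding), raising `PDFValueError` for any other `K`. Below is a minimal usage sketch. The parameter names come straight from the function body above; the import path and the concrete values are illustrative assumptions, not part of this diff (in pdfminer itself, the stream filter layer builds `params` from the PDF stream's decode parameters).

```python
# Minimal usage sketch; import path and parameter values are assumptions.
from pdfminer.ccitt import ccittfaxdecode

params = {
    "K": -1,                    # Group 4 (2-D) coding; any other value raises PDFValueError
    "Columns": 1728,            # pixels per scanline (standard fax width)
    "EncodedByteAlign": False,  # True if each coded line starts on a byte boundary
    "BlackIs1": False,          # True inverts the output bits (CCITTFaxDecoder's `reversed`)
}

compressed = b""  # placeholder: real data comes from a PDF image stream
decoded = ccittfaxdecode(compressed, params)
# `decoded` packs one bit per pixel, 8 pixels per byte, most significant bit
# first, exactly as CCITTFaxDecoder.output_line() accumulates it into _buf.
```

Note the design split: `CCITTG4Parser` handles the bit-level decoding and calls `output_line()` once per completed scanline, so consumers pick their own output format by subclassing, as both `CCITTFaxDecoder` and the pygame-based `Parser` in `main()` above do.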
diff --git a/templates/skills/file_manager/dependencies/pdfminer/cmapdb.py b/templates/skills/file_manager/dependencies/pdfminer/cmapdb.py
new file mode 100644
index 00000000..051c9d15
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/cmapdb.py
@@ -0,0 +1,505 @@
+"""Adobe character mapping (CMap) support.
+
+CMaps provide the mapping from character codes to character ids (CIDs),
+and from CIDs to Unicode code points.
+ +More information is available on: + + https://github.com/adobe-type-tools/cmap-resources + +""" + +import gzip +import logging +import os +import os.path +import pickle as pickle +import struct +import sys +from typing import ( + Any, + BinaryIO, + Dict, + Iterable, + Iterator, + List, + MutableMapping, + Optional, + TextIO, + Tuple, + Union, + cast, + Set, +) + +from pdfminer.pdfexceptions import PDFException, PDFTypeError +from .encodingdb import name2unicode +from .psparser import KWD +from pdfminer.psexceptions import PSEOF, PSSyntaxError +from .psparser import PSKeyword +from .psparser import PSLiteral +from .psparser import PSStackParser +from .psparser import literal_name +from .utils import choplist +from .utils import nunpack + +log = logging.getLogger(__name__) + + +class CMapError(PDFException): + pass + + +class CMapBase: + + debug = 0 + + def __init__(self, **kwargs: object) -> None: + self.attrs: MutableMapping[str, object] = kwargs.copy() + + def is_vertical(self) -> bool: + return self.attrs.get("WMode", 0) != 0 + + def set_attr(self, k: str, v: object) -> None: + self.attrs[k] = v + + def add_code2cid(self, code: str, cid: int) -> None: + pass + + def add_cid2unichr(self, cid: int, code: Union[PSLiteral, bytes, int]) -> None: + pass + + def use_cmap(self, cmap: "CMapBase") -> None: + pass + + def decode(self, code: bytes) -> Iterable[int]: + raise NotImplementedError + + +class CMap(CMapBase): + def __init__(self, **kwargs: Union[str, int]) -> None: + CMapBase.__init__(self, **kwargs) + self.code2cid: Dict[int, object] = {} + + def __repr__(self) -> str: + return "" % self.attrs.get("CMapName") + + def use_cmap(self, cmap: CMapBase) -> None: + assert isinstance(cmap, CMap), str(type(cmap)) + + def copy(dst: Dict[int, object], src: Dict[int, object]) -> None: + for (k, v) in src.items(): + if isinstance(v, dict): + d: Dict[int, object] = {} + dst[k] = d + copy(d, v) + else: + dst[k] = v + + copy(self.code2cid, cmap.code2cid) + + def decode(self, code: bytes) -> Iterator[int]: + log.debug("decode: %r, %r", self, code) + d = self.code2cid + for i in iter(code): + if i in d: + x = d[i] + if isinstance(x, int): + yield x + d = self.code2cid + else: + d = cast(Dict[int, object], x) + else: + d = self.code2cid + + def dump( + self, + out: TextIO = sys.stdout, + code2cid: Optional[Dict[int, object]] = None, + code: Tuple[int, ...] 
= (), + ) -> None: + if code2cid is None: + code2cid = self.code2cid + code = () + for (k, v) in sorted(code2cid.items()): + c = code + (k,) + if isinstance(v, int): + out.write("code %r = cid %d\n" % (c, v)) + else: + self.dump(out=out, code2cid=cast(Dict[int, object], v), code=c) + + +class IdentityCMap(CMapBase): + def decode(self, code: bytes) -> Tuple[int, ...]: + n = len(code) // 2 + if n: + return struct.unpack(">%dH" % n, code) + else: + return () + + +class IdentityCMapByte(IdentityCMap): + def decode(self, code: bytes) -> Tuple[int, ...]: + n = len(code) + if n: + return struct.unpack(">%dB" % n, code) + else: + return () + + +class UnicodeMap(CMapBase): + def __init__(self, **kwargs: Union[str, int]) -> None: + CMapBase.__init__(self, **kwargs) + self.cid2unichr: Dict[int, str] = {} + + def __repr__(self) -> str: + return "" % self.attrs.get("CMapName") + + def get_unichr(self, cid: int) -> str: + log.debug("get_unichr: %r, %r", self, cid) + return self.cid2unichr[cid] + + def dump(self, out: TextIO = sys.stdout) -> None: + for (k, v) in sorted(self.cid2unichr.items()): + out.write("cid %d = unicode %r\n" % (k, v)) + + +class IdentityUnicodeMap(UnicodeMap): + def get_unichr(self, cid: int) -> str: + """Interpret character id as unicode codepoint""" + log.debug("get_unichr: %r, %r", self, cid) + return chr(cid) + + +class FileCMap(CMap): + def add_code2cid(self, code: str, cid: int) -> None: + assert isinstance(code, str) and isinstance(cid, int), str( + (type(code), type(cid)) + ) + d = self.code2cid + for c in code[:-1]: + ci = ord(c) + if ci in d: + d = cast(Dict[int, object], d[ci]) + else: + t: Dict[int, object] = {} + d[ci] = t + d = t + ci = ord(code[-1]) + d[ci] = cid + + +class FileUnicodeMap(UnicodeMap): + def add_cid2unichr(self, cid: int, code: Union[PSLiteral, bytes, int]) -> None: + assert isinstance(cid, int), str(type(cid)) + if isinstance(code, PSLiteral): + # Interpret as an Adobe glyph name. + assert isinstance(code.name, str) + unichr = name2unicode(code.name) + elif isinstance(code, bytes): + # Interpret as UTF-16BE. + unichr = code.decode("UTF-16BE", "ignore") + elif isinstance(code, int): + unichr = chr(code) + else: + raise PDFTypeError(code) + + # A0 = non-breaking space, some weird fonts can have a collision on a cid here. 
+ if unichr == "\u00A0" and self.cid2unichr.get(cid) == " ": + return + self.cid2unichr[cid] = unichr + + +class PyCMap(CMap): + def __init__(self, name: str, module: Any) -> None: + super().__init__(CMapName=name) + self.code2cid = module.CODE2CID + if module.IS_VERTICAL: + self.attrs["WMode"] = 1 + + +class PyUnicodeMap(UnicodeMap): + def __init__(self, name: str, module: Any, vertical: bool) -> None: + super().__init__(CMapName=name) + if vertical: + self.cid2unichr = module.CID2UNICHR_V + self.attrs["WMode"] = 1 + else: + self.cid2unichr = module.CID2UNICHR_H + + +class CMapDB: + + _cmap_cache: Dict[str, PyCMap] = {} + _umap_cache: Dict[str, List[PyUnicodeMap]] = {} + + class CMapNotFound(CMapError): + pass + + @classmethod + def _load_data(cls, name: str) -> Any: + name = name.replace("\0", "") + filename = "%s.pickle.gz" % name + log.debug("loading: %r", name) + cmap_paths = ( + os.environ.get("CMAP_PATH", "/usr/share/pdfminer/"), + os.path.join(os.path.dirname(__file__), "cmap"), + ) + for directory in cmap_paths: + path = os.path.join(directory, filename) + if os.path.exists(path): + gzfile = gzip.open(path) + try: + return type(str(name), (), pickle.loads(gzfile.read())) + finally: + gzfile.close() + else: + raise CMapDB.CMapNotFound(name) + + @classmethod + def get_cmap(cls, name: str) -> CMapBase: + if name == "Identity-H": + return IdentityCMap(WMode=0) + elif name == "Identity-V": + return IdentityCMap(WMode=1) + elif name == "OneByteIdentityH": + return IdentityCMapByte(WMode=0) + elif name == "OneByteIdentityV": + return IdentityCMapByte(WMode=1) + try: + return cls._cmap_cache[name] + except KeyError: + pass + data = cls._load_data(name) + cls._cmap_cache[name] = cmap = PyCMap(name, data) + return cmap + + @classmethod + def get_unicode_map(cls, name: str, vertical: bool = False) -> UnicodeMap: + try: + return cls._umap_cache[name][vertical] + except KeyError: + pass + data = cls._load_data("to-unicode-%s" % name) + cls._umap_cache[name] = [PyUnicodeMap(name, data, v) for v in (False, True)] + return cls._umap_cache[name][vertical] + + +class CMapParser(PSStackParser[PSKeyword]): + def __init__(self, cmap: CMapBase, fp: BinaryIO) -> None: + PSStackParser.__init__(self, fp) + self.cmap = cmap + # some ToUnicode maps don't have "begincmap" keyword. + self._in_cmap = True + self._warnings: Set[str] = set() + return + + def run(self) -> None: + try: + self.nextobject() + except PSEOF: + pass + return + + KEYWORD_BEGINCMAP = KWD(b"begincmap") + KEYWORD_ENDCMAP = KWD(b"endcmap") + KEYWORD_USECMAP = KWD(b"usecmap") + KEYWORD_DEF = KWD(b"def") + KEYWORD_BEGINCODESPACERANGE = KWD(b"begincodespacerange") + KEYWORD_ENDCODESPACERANGE = KWD(b"endcodespacerange") + KEYWORD_BEGINCIDRANGE = KWD(b"begincidrange") + KEYWORD_ENDCIDRANGE = KWD(b"endcidrange") + KEYWORD_BEGINCIDCHAR = KWD(b"begincidchar") + KEYWORD_ENDCIDCHAR = KWD(b"endcidchar") + KEYWORD_BEGINBFRANGE = KWD(b"beginbfrange") + KEYWORD_ENDBFRANGE = KWD(b"endbfrange") + KEYWORD_BEGINBFCHAR = KWD(b"beginbfchar") + KEYWORD_ENDBFCHAR = KWD(b"endbfchar") + KEYWORD_BEGINNOTDEFRANGE = KWD(b"beginnotdefrange") + KEYWORD_ENDNOTDEFRANGE = KWD(b"endnotdefrange") + + def do_keyword(self, pos: int, token: PSKeyword) -> None: + """ToUnicode CMaps + + See Section 5.9.2 - ToUnicode CMaps of the PDF Reference. 
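+        A small sketch of how this parser is typically driven; the stream
+        below is an illustrative ToUnicode fragment, not data taken from
+        this repository:
+
+            import io
+            from pdfminer.cmapdb import CMapParser, FileUnicodeMap
+
+            data = b"1 beginbfchar <0001> <0041> endbfchar"
+            umap = FileUnicodeMap()
+            CMapParser(umap, io.BytesIO(data)).run()
+            print(umap.get_unichr(1))  # 'A' (CID 1 -> U+0041 via UTF-16BE)
+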
+ """ + if token is self.KEYWORD_BEGINCMAP: + self._in_cmap = True + self.popall() + return + + elif token is self.KEYWORD_ENDCMAP: + self._in_cmap = False + return + + if not self._in_cmap: + return + + if token is self.KEYWORD_DEF: + try: + ((_, k), (_, v)) = self.pop(2) + self.cmap.set_attr(literal_name(k), v) + except PSSyntaxError: + pass + return + + if token is self.KEYWORD_USECMAP: + try: + ((_, cmapname),) = self.pop(1) + self.cmap.use_cmap(CMapDB.get_cmap(literal_name(cmapname))) + except PSSyntaxError: + pass + except CMapDB.CMapNotFound: + pass + return + + if token is self.KEYWORD_BEGINCODESPACERANGE: + self.popall() + return + if token is self.KEYWORD_ENDCODESPACERANGE: + self.popall() + return + + if token is self.KEYWORD_BEGINCIDRANGE: + self.popall() + return + + if token is self.KEYWORD_ENDCIDRANGE: + objs = [obj for (__, obj) in self.popall()] + for (start_byte, end_byte, cid) in choplist(3, objs): + if not isinstance(start_byte, bytes): + self._warn_once("The start object of begincidrange is not a byte.") + continue + if not isinstance(end_byte, bytes): + self._warn_once("The end object of begincidrange is not a byte.") + continue + if not isinstance(cid, int): + self._warn_once("The cid object of begincidrange is not a byte.") + continue + if len(start_byte) != len(end_byte): + self._warn_once( + "The start and end byte of begincidrange have " + "different lengths." + ) + continue + start_prefix = start_byte[:-4] + end_prefix = end_byte[:-4] + if start_prefix != end_prefix: + self._warn_once( + "The prefix of the start and end byte of " + "begincidrange are not the same." + ) + continue + svar = start_byte[-4:] + evar = end_byte[-4:] + start = nunpack(svar) + end = nunpack(evar) + vlen = len(svar) + for i in range(end - start + 1): + x = start_prefix + struct.pack(">L", start + i)[-vlen:] + self.cmap.add_cid2unichr(cid + i, x) + return + + if token is self.KEYWORD_BEGINCIDCHAR: + self.popall() + return + + if token is self.KEYWORD_ENDCIDCHAR: + objs = [obj for (__, obj) in self.popall()] + for (cid, code) in choplist(2, objs): + if isinstance(code, bytes) and isinstance(cid, int): + self.cmap.add_cid2unichr(cid, code) + return + + if token is self.KEYWORD_BEGINBFRANGE: + self.popall() + return + + if token is self.KEYWORD_ENDBFRANGE: + objs = [obj for (__, obj) in self.popall()] + for (start_byte, end_byte, code) in choplist(3, objs): + if not isinstance(start_byte, bytes): + self._warn_once("The start object is not a byte.") + continue + if not isinstance(end_byte, bytes): + self._warn_once("The end object is not a byte.") + continue + if len(start_byte) != len(end_byte): + self._warn_once("The start and end byte have different lengths.") + continue + start = nunpack(start_byte) + end = nunpack(end_byte) + if isinstance(code, list): + if len(code) != end - start + 1: + self._warn_once( + "The difference between the start and end " + "offsets does not match the code length." 
+ ) + for cid, unicode_value in zip(range(start, end + 1), code): + self.cmap.add_cid2unichr(cid, unicode_value) + else: + assert isinstance(code, bytes) + var = code[-4:] + base = nunpack(var) + prefix = code[:-4] + vlen = len(var) + for i in range(end - start + 1): + x = prefix + struct.pack(">L", base + i)[-vlen:] + self.cmap.add_cid2unichr(start + i, x) + return + + if token is self.KEYWORD_BEGINBFCHAR: + self.popall() + return + + if token is self.KEYWORD_ENDBFCHAR: + objs = [obj for (__, obj) in self.popall()] + for (cid, code) in choplist(2, objs): + if isinstance(cid, bytes) and isinstance(code, bytes): + self.cmap.add_cid2unichr(nunpack(cid), code) + return + + if token is self.KEYWORD_BEGINNOTDEFRANGE: + self.popall() + return + + if token is self.KEYWORD_ENDNOTDEFRANGE: + self.popall() + return + + self.push((pos, token)) + + def _warn_once(self, msg: str) -> None: + """Warn once for each unique message""" + if msg not in self._warnings: + self._warnings.add(msg) + base_msg = ( + "Ignoring (part of) ToUnicode map because the PDF data " + "does not conform to the format. This could result in " + "(cid) values in the output. " + ) + log.warning(base_msg + msg) + + +def main(argv: List[str]) -> None: + from warnings import warn + + warn( + "The function main() from cmapdb.py will be removed in 2023. It was probably " + "introduced for testing purposes a long time ago, and no longer relevant. " + "Feel free to create a GitHub issue if you disagree.", + DeprecationWarning, + ) + + args = argv[1:] + for fname in args: + fp = open(fname, "rb") + cmap = FileUnicodeMap() + CMapParser(cmap, fp).run() + fp.close() + cmap.dump() + return + + +if __name__ == "__main__": + main(sys.argv) diff --git a/templates/skills/file_manager/dependencies/pdfminer/converter.py b/templates/skills/file_manager/dependencies/pdfminer/converter.py new file mode 100644 index 00000000..9b90a769 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/converter.py @@ -0,0 +1,1023 @@ +import io +import logging +import re +from typing import ( + BinaryIO, + Dict, + Generic, + List, + Optional, + Sequence, + TextIO, + Tuple, + TypeVar, + Union, + cast, +) + +from pdfminer.pdfcolor import PDFColorSpace +from . 
import utils +from .image import ImageWriter +from .layout import LAParams, LTComponent, TextGroupElement +from .layout import LTAnno +from .layout import LTChar +from .layout import LTContainer +from .layout import LTCurve +from .layout import LTFigure +from .layout import LTImage +from .layout import LTItem +from .layout import LTLayoutContainer +from .layout import LTLine +from .layout import LTPage +from .layout import LTRect +from .layout import LTText +from .layout import LTTextBox +from .layout import LTTextBoxVertical +from .layout import LTTextGroup +from .layout import LTTextLine +from .pdfdevice import PDFTextDevice +from .pdffont import PDFFont +from .pdffont import PDFUnicodeNotDefined +from .pdfinterp import PDFGraphicState, PDFResourceManager +from .pdfpage import PDFPage +from .pdftypes import PDFStream +from .pdfexceptions import PDFValueError +from .utils import AnyIO, Point, Matrix, Rect, PathSegment, make_compat_str +from .utils import apply_matrix_pt +from .utils import bbox2str +from .utils import enc +from .utils import mult_matrix + +log = logging.getLogger(__name__) + + +class PDFLayoutAnalyzer(PDFTextDevice): + cur_item: LTLayoutContainer + ctm: Matrix + + def __init__( + self, + rsrcmgr: PDFResourceManager, + pageno: int = 1, + laparams: Optional[LAParams] = None, + ) -> None: + PDFTextDevice.__init__(self, rsrcmgr) + self.pageno = pageno + self.laparams = laparams + self._stack: List[LTLayoutContainer] = [] + + def begin_page(self, page: PDFPage, ctm: Matrix) -> None: + (x0, y0, x1, y1) = page.mediabox + (x0, y0) = apply_matrix_pt(ctm, (x0, y0)) + (x1, y1) = apply_matrix_pt(ctm, (x1, y1)) + mediabox = (0, 0, abs(x0 - x1), abs(y0 - y1)) + self.cur_item = LTPage(self.pageno, mediabox) + + def end_page(self, page: PDFPage) -> None: + assert not self._stack, str(len(self._stack)) + assert isinstance(self.cur_item, LTPage), str(type(self.cur_item)) + if self.laparams is not None: + self.cur_item.analyze(self.laparams) + self.pageno += 1 + self.receive_layout(self.cur_item) + + def begin_figure(self, name: str, bbox: Rect, matrix: Matrix) -> None: + self._stack.append(self.cur_item) + self.cur_item = LTFigure(name, bbox, mult_matrix(matrix, self.ctm)) + + def end_figure(self, _: str) -> None: + fig = self.cur_item + assert isinstance(self.cur_item, LTFigure), str(type(self.cur_item)) + self.cur_item = self._stack.pop() + self.cur_item.add(fig) + + def render_image(self, name: str, stream: PDFStream) -> None: + assert isinstance(self.cur_item, LTFigure), str(type(self.cur_item)) + item = LTImage( + name, + stream, + (self.cur_item.x0, self.cur_item.y0, self.cur_item.x1, self.cur_item.y1), + ) + self.cur_item.add(item) + + def paint_path( + self, + gstate: PDFGraphicState, + stroke: bool, + fill: bool, + evenodd: bool, + path: Sequence[PathSegment], + ) -> None: + """Paint paths described in section 4.4 of the PDF reference manual""" + shape = "".join(x[0] for x in path) + + if shape[:1] != "m": + # Per PDF Reference Section 4.4.1, "path construction operators may + # be invoked in any sequence, but the first one invoked must be m + # or re to begin a new subpath." Since pdfminer.six already + # converts all `re` (rectangle) operators to their equivelent + # `mlllh` representation, paths ingested by `.paint_path(...)` that + # do not begin with the `m` operator are invalid. 
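+            # (Illustrative note: the shape string is built from the first
+            # letter of each segment, so a rectangle pre-converted from
+            # `re` arrives as the segments m, l, l, l, h, yields "mlllh",
+            # and is handled by the LTRect branch below.)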
+ pass + + elif shape.count("m") > 1: + # recurse if there are multiple m's in this shape + for m in re.finditer(r"m[^m]+", shape): + subpath = path[m.start(0) : m.end(0)] + self.paint_path(gstate, stroke, fill, evenodd, subpath) + + else: + # Although the 'h' command does not not literally provide a + # point-position, its position is (by definition) equal to the + # subpath's starting point. + # + # And, per Section 4.4's Table 4.9, all other path commands place + # their point-position in their final two arguments. (Any preceding + # arguments represent control points on Bézier curves.) + raw_pts = [ + cast(Point, p[-2:] if p[0] != "h" else path[0][-2:]) for p in path + ] + pts = [apply_matrix_pt(self.ctm, pt) for pt in raw_pts] + + operators = [str(operation[0]) for operation in path] + transformed_points = [ + [ + apply_matrix_pt(self.ctm, (float(operand1), float(operand2))) + for operand1, operand2 in zip(operation[1::2], operation[2::2]) + ] + for operation in path + ] + transformed_path = [ + cast(PathSegment, (o, *p)) + for o, p in zip(operators, transformed_points) + ] + + if shape in {"mlh", "ml"}: + # single line segment + # + # Note: 'ml', in conditional above, is a frequent anomaly + # that we want to support. + line = LTLine( + gstate.linewidth, + pts[0], + pts[1], + stroke, + fill, + evenodd, + gstate.scolor, + gstate.ncolor, + original_path=transformed_path, + dashing_style=gstate.dash, + ) + self.cur_item.add(line) + + elif shape in {"mlllh", "mllll"}: + (x0, y0), (x1, y1), (x2, y2), (x3, y3), _ = pts + + is_closed_loop = pts[0] == pts[4] + has_square_coordinates = ( + x0 == x1 and y1 == y2 and x2 == x3 and y3 == y0 + ) or (y0 == y1 and x1 == x2 and y2 == y3 and x3 == x0) + if is_closed_loop and has_square_coordinates: + rect = LTRect( + gstate.linewidth, + (*pts[0], *pts[2]), + stroke, + fill, + evenodd, + gstate.scolor, + gstate.ncolor, + transformed_path, + gstate.dash, + ) + self.cur_item.add(rect) + else: + curve = LTCurve( + gstate.linewidth, + pts, + stroke, + fill, + evenodd, + gstate.scolor, + gstate.ncolor, + transformed_path, + gstate.dash, + ) + self.cur_item.add(curve) + else: + curve = LTCurve( + gstate.linewidth, + pts, + stroke, + fill, + evenodd, + gstate.scolor, + gstate.ncolor, + transformed_path, + gstate.dash, + ) + self.cur_item.add(curve) + + def render_char( + self, + matrix: Matrix, + font: PDFFont, + fontsize: float, + scaling: float, + rise: float, + cid: int, + ncs: PDFColorSpace, + graphicstate: PDFGraphicState, + ) -> float: + try: + text = font.to_unichr(cid) + assert isinstance(text, str), str(type(text)) + except PDFUnicodeNotDefined: + text = self.handle_undefined_char(font, cid) + textwidth = font.char_width(cid) + textdisp = font.char_disp(cid) + item = LTChar( + matrix, + font, + fontsize, + scaling, + rise, + text, + textwidth, + textdisp, + ncs, + graphicstate, + ) + self.cur_item.add(item) + return item.adv + + def handle_undefined_char(self, font: PDFFont, cid: int) -> str: + log.debug("undefined: %r, %r", font, cid) + return "(cid:%d)" % cid + + def receive_layout(self, ltpage: LTPage) -> None: + pass + + +class PDFPageAggregator(PDFLayoutAnalyzer): + def __init__( + self, + rsrcmgr: PDFResourceManager, + pageno: int = 1, + laparams: Optional[LAParams] = None, + ) -> None: + PDFLayoutAnalyzer.__init__(self, rsrcmgr, pageno=pageno, laparams=laparams) + self.result: Optional[LTPage] = None + + def receive_layout(self, ltpage: LTPage) -> None: + self.result = ltpage + + def get_result(self) -> LTPage: + assert self.result is not None 
+ return self.result + + +# Some PDFConverter children support only binary I/O +IOType = TypeVar("IOType", TextIO, BinaryIO, AnyIO) + + +class PDFConverter(PDFLayoutAnalyzer, Generic[IOType]): + def __init__( + self, + rsrcmgr: PDFResourceManager, + outfp: IOType, + codec: str = "utf-8", + pageno: int = 1, + laparams: Optional[LAParams] = None, + ) -> None: + PDFLayoutAnalyzer.__init__(self, rsrcmgr, pageno=pageno, laparams=laparams) + self.outfp: IOType = outfp + self.codec = codec + self.outfp_binary = self._is_binary_stream(self.outfp) + + @staticmethod + def _is_binary_stream(outfp: AnyIO) -> bool: + """Test if an stream is binary or not""" + if "b" in getattr(outfp, "mode", ""): + return True + elif hasattr(outfp, "mode"): + # output stream has a mode, but it does not contain 'b' + return False + elif isinstance(outfp, io.BytesIO): + return True + elif isinstance(outfp, io.StringIO): + return False + elif isinstance(outfp, io.TextIOBase): + return False + + return True + + +class TextConverter(PDFConverter[AnyIO]): + def __init__( + self, + rsrcmgr: PDFResourceManager, + outfp: AnyIO, + codec: str = "utf-8", + pageno: int = 1, + laparams: Optional[LAParams] = None, + showpageno: bool = False, + imagewriter: Optional[ImageWriter] = None, + ) -> None: + super().__init__(rsrcmgr, outfp, codec=codec, pageno=pageno, laparams=laparams) + self.showpageno = showpageno + self.imagewriter = imagewriter + + def write_text(self, text: str) -> None: + text = utils.compatible_encode_method(text, self.codec, "ignore") + if self.outfp_binary: + cast(BinaryIO, self.outfp).write(text.encode()) + else: + cast(TextIO, self.outfp).write(text) + + def receive_layout(self, ltpage: LTPage) -> None: + def render(item: LTItem) -> None: + if isinstance(item, LTContainer): + for child in item: + render(child) + elif isinstance(item, LTText): + self.write_text(item.get_text()) + if isinstance(item, LTTextBox): + self.write_text("\n") + elif isinstance(item, LTImage): + if self.imagewriter is not None: + self.imagewriter.export_image(item) + + if self.showpageno: + self.write_text("Page %s\n" % ltpage.pageid) + render(ltpage) + self.write_text("\f") + + # Some dummy functions to save memory/CPU when all that is wanted + # is text. This stops all the image and drawing output from being + # recorded and taking up RAM. + def render_image(self, name: str, stream: PDFStream) -> None: + if self.imagewriter is None: + return + PDFConverter.render_image(self, name, stream) + return + + def paint_path( + self, + gstate: PDFGraphicState, + stroke: bool, + fill: bool, + evenodd: bool, + path: Sequence[PathSegment], + ) -> None: + return + + +class HTMLConverter(PDFConverter[AnyIO]): + RECT_COLORS = { + "figure": "yellow", + "textline": "magenta", + "textbox": "cyan", + "textgroup": "red", + "curve": "black", + "page": "gray", + } + + TEXT_COLORS = { + "textbox": "blue", + "char": "black", + } + + def __init__( + self, + rsrcmgr: PDFResourceManager, + outfp: AnyIO, + codec: str = "utf-8", + pageno: int = 1, + laparams: Optional[LAParams] = None, + scale: float = 1, + fontscale: float = 1.0, + layoutmode: str = "normal", + showpageno: bool = True, + pagemargin: int = 50, + imagewriter: Optional[ImageWriter] = None, + debug: int = 0, + rect_colors: Optional[Dict[str, str]] = None, + text_colors: Optional[Dict[str, str]] = None, + ) -> None: + PDFConverter.__init__( + self, rsrcmgr, outfp, codec=codec, pageno=pageno, laparams=laparams + ) + + # write() assumes a codec for binary I/O, or no codec for text I/O. 
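+        # For example, a binary sink such as io.BytesIO() needs a codec
+        # (e.g. "utf-8"), while a text sink such as io.StringIO() needs
+        # codec="" so that write() emits str; either mismatch raises
+        # PDFValueError just below.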
+ if self.outfp_binary and not self.codec: + raise PDFValueError("Codec is required for a binary I/O output") + if not self.outfp_binary and self.codec: + raise PDFValueError("Codec must not be specified for a text I/O output") + + if text_colors is None: + text_colors = {"char": "black"} + if rect_colors is None: + rect_colors = {"curve": "black", "page": "gray"} + + self.scale = scale + self.fontscale = fontscale + self.layoutmode = layoutmode + self.showpageno = showpageno + self.pagemargin = pagemargin + self.imagewriter = imagewriter + self.rect_colors = rect_colors + self.text_colors = text_colors + if debug: + self.rect_colors.update(self.RECT_COLORS) + self.text_colors.update(self.TEXT_COLORS) + self._yoffset: float = self.pagemargin + self._font: Optional[Tuple[str, float]] = None + self._fontstack: List[Optional[Tuple[str, float]]] = [] + self.write_header() + return + + def write(self, text: str) -> None: + if self.codec: + cast(BinaryIO, self.outfp).write(text.encode(self.codec)) + else: + cast(TextIO, self.outfp).write(text) + return + + def write_header(self) -> None: + self.write("\n") + if self.codec: + s = ( + '\n' % self.codec + ) + else: + s = '\n' + self.write(s) + self.write("\n") + return + + def write_footer(self) -> None: + page_links = [f'{i}' for i in range(1, self.pageno)] + s = '
<div style="position:absolute; top:0px;">Page: %s</div>
\n' % ", ".join( + page_links + ) + self.write(s) + self.write("\n") + return + + def write_text(self, text: str) -> None: + self.write(enc(text)) + return + + def place_rect( + self, color: str, borderwidth: int, x: float, y: float, w: float, h: float + ) -> None: + color2 = self.rect_colors.get(color) + if color2 is not None: + s = ( + '\n' + % ( + color2, + borderwidth, + x * self.scale, + (self._yoffset - y) * self.scale, + w * self.scale, + h * self.scale, + ) + ) + self.write(s) + return + + def place_border(self, color: str, borderwidth: int, item: LTComponent) -> None: + self.place_rect(color, borderwidth, item.x0, item.y1, item.width, item.height) + return + + def place_image( + self, item: LTImage, borderwidth: int, x: float, y: float, w: float, h: float + ) -> None: + if self.imagewriter is not None: + name = self.imagewriter.export_image(item) + s = ( + '\n' + % ( + enc(name), + borderwidth, + x * self.scale, + (self._yoffset - y) * self.scale, + w * self.scale, + h * self.scale, + ) + ) + self.write(s) + return + + def place_text( + self, color: str, text: str, x: float, y: float, size: float + ) -> None: + color2 = self.text_colors.get(color) + if color2 is not None: + s = ( + '' + % ( + color2, + x * self.scale, + (self._yoffset - y) * self.scale, + size * self.scale * self.fontscale, + ) + ) + self.write(s) + self.write_text(text) + self.write("\n") + return + + def begin_div( + self, + color: str, + borderwidth: int, + x: float, + y: float, + w: float, + h: float, + writing_mode: str = "False", + ) -> None: + self._fontstack.append(self._font) + self._font = None + s = ( + '
' + % ( + color, + borderwidth, + writing_mode, + x * self.scale, + (self._yoffset - y) * self.scale, + w * self.scale, + h * self.scale, + ) + ) + self.write(s) + return + + def end_div(self, color: str) -> None: + if self._font is not None: + self.write("") + self._font = self._fontstack.pop() + self.write("
</div>
") + return + + def put_text(self, text: str, fontname: str, fontsize: float) -> None: + font = (fontname, fontsize) + if font != self._font: + if self._font is not None: + self.write("") + # Remove subset tag from fontname, see PDF Reference 5.5.3 + fontname_without_subset_tag = fontname.split("+")[-1] + self.write( + '' + % (fontname_without_subset_tag, fontsize * self.scale * self.fontscale) + ) + self._font = font + self.write_text(text) + return + + def put_newline(self) -> None: + self.write("
") + return + + def receive_layout(self, ltpage: LTPage) -> None: + def show_group(item: Union[LTTextGroup, TextGroupElement]) -> None: + if isinstance(item, LTTextGroup): + self.place_border("textgroup", 1, item) + for child in item: + show_group(child) + return + + def render(item: LTItem) -> None: + child: LTItem + if isinstance(item, LTPage): + self._yoffset += item.y1 + self.place_border("page", 1, item) + if self.showpageno: + self.write( + '
' + % ((self._yoffset - item.y1) * self.scale) + ) + self.write( + 'Page {}
</a></div>
\n'.format( + item.pageid, item.pageid + ) + ) + for child in item: + render(child) + if item.groups is not None: + for group in item.groups: + show_group(group) + elif isinstance(item, LTCurve): + self.place_border("curve", 1, item) + elif isinstance(item, LTFigure): + self.begin_div("figure", 1, item.x0, item.y1, item.width, item.height) + for child in item: + render(child) + self.end_div("figure") + elif isinstance(item, LTImage): + self.place_image(item, 1, item.x0, item.y1, item.width, item.height) + else: + if self.layoutmode == "exact": + if isinstance(item, LTTextLine): + self.place_border("textline", 1, item) + for child in item: + render(child) + elif isinstance(item, LTTextBox): + self.place_border("textbox", 1, item) + self.place_text( + "textbox", str(item.index + 1), item.x0, item.y1, 20 + ) + for child in item: + render(child) + elif isinstance(item, LTChar): + self.place_border("char", 1, item) + self.place_text( + "char", item.get_text(), item.x0, item.y1, item.size + ) + else: + if isinstance(item, LTTextLine): + for child in item: + render(child) + if self.layoutmode != "loose": + self.put_newline() + elif isinstance(item, LTTextBox): + self.begin_div( + "textbox", + 1, + item.x0, + item.y1, + item.width, + item.height, + item.get_writing_mode(), + ) + for child in item: + render(child) + self.end_div("textbox") + elif isinstance(item, LTChar): + fontname = make_compat_str(item.fontname) + self.put_text(item.get_text(), fontname, item.size) + elif isinstance(item, LTText): + self.write_text(item.get_text()) + return + + render(ltpage) + self._yoffset += self.pagemargin + return + + def close(self) -> None: + self.write_footer() + return + + +class XMLConverter(PDFConverter[AnyIO]): + + CONTROL = re.compile("[\x00-\x08\x0b-\x0c\x0e-\x1f]") + + def __init__( + self, + rsrcmgr: PDFResourceManager, + outfp: AnyIO, + codec: str = "utf-8", + pageno: int = 1, + laparams: Optional[LAParams] = None, + imagewriter: Optional[ImageWriter] = None, + stripcontrol: bool = False, + ) -> None: + PDFConverter.__init__( + self, rsrcmgr, outfp, codec=codec, pageno=pageno, laparams=laparams + ) + + # write() assumes a codec for binary I/O, or no codec for text I/O. 
+ if self.outfp_binary == (not self.codec): + raise PDFValueError("Codec is required for a binary I/O output") + + self.imagewriter = imagewriter + self.stripcontrol = stripcontrol + self.write_header() + return + + def write(self, text: str) -> None: + if self.codec: + cast(BinaryIO, self.outfp).write(text.encode(self.codec)) + else: + cast(TextIO, self.outfp).write(text) + return + + def write_header(self) -> None: + if self.codec: + self.write('\n' % self.codec) + else: + self.write('\n') + self.write("\n") + return + + def write_footer(self) -> None: + self.write("\n") + return + + def write_text(self, text: str) -> None: + if self.stripcontrol: + text = self.CONTROL.sub("", text) + self.write(enc(text)) + return + + def receive_layout(self, ltpage: LTPage) -> None: + def show_group(item: LTItem) -> None: + if isinstance(item, LTTextBox): + self.write( + '\n' + % (item.index, bbox2str(item.bbox)) + ) + elif isinstance(item, LTTextGroup): + self.write('\n' % bbox2str(item.bbox)) + for child in item: + show_group(child) + self.write("\n") + return + + def render(item: LTItem) -> None: + child: LTItem + if isinstance(item, LTPage): + s = '\n' % ( + item.pageid, + bbox2str(item.bbox), + item.rotate, + ) + self.write(s) + for child in item: + render(child) + if item.groups is not None: + self.write("\n") + for group in item.groups: + show_group(group) + self.write("\n") + self.write("\n") + elif isinstance(item, LTLine): + s = '\n' % ( + item.linewidth, + bbox2str(item.bbox), + ) + self.write(s) + elif isinstance(item, LTRect): + s = '\n' % ( + item.linewidth, + bbox2str(item.bbox), + ) + self.write(s) + elif isinstance(item, LTCurve): + s = '\n' % ( + item.linewidth, + bbox2str(item.bbox), + item.get_pts(), + ) + self.write(s) + elif isinstance(item, LTFigure): + s = '
<figure name="{}" bbox="{}">
\n'.format( + item.name, bbox2str(item.bbox) + ) + self.write(s) + for child in item: + render(child) + self.write("
</figure>
\n") + elif isinstance(item, LTTextLine): + self.write('\n' % bbox2str(item.bbox)) + for child in item: + render(child) + self.write("\n") + elif isinstance(item, LTTextBox): + wmode = "" + if isinstance(item, LTTextBoxVertical): + wmode = ' wmode="vertical"' + s = '\n' % ( + item.index, + bbox2str(item.bbox), + wmode, + ) + self.write(s) + for child in item: + render(child) + self.write("\n") + elif isinstance(item, LTChar): + s = ( + '' + % ( + enc(item.fontname), + bbox2str(item.bbox), + item.ncs.name, + item.graphicstate.ncolor, + item.size, + ) + ) + self.write(s) + self.write_text(item.get_text()) + self.write("\n") + elif isinstance(item, LTText): + self.write("%s\n" % item.get_text()) + elif isinstance(item, LTImage): + if self.imagewriter is not None: + name = self.imagewriter.export_image(item) + self.write( + '\n' + % (enc(name), item.width, item.height) + ) + else: + self.write( + '\n' % (item.width, item.height) + ) + else: + assert False, str(("Unhandled", item)) + return + + render(ltpage) + return + + def close(self) -> None: + self.write_footer() + return + + +class HOCRConverter(PDFConverter[AnyIO]): + """Extract an hOCR representation from explicit text information within a PDF.""" + + # Where text is being extracted from a variety of types of PDF within a + # business process, those PDFs where the text is only present in image + # form will need to be analysed using an OCR tool which will typically + # output hOCR. This converter extracts the explicit text information from + # those PDFs that do have it and uses it to genxerate a basic hOCR + # representation that is designed to be used in conjunction with the image + # of the PDF in the same way as genuine OCR output would be, but without the + # inevitable OCR errors. + + # The converter does not handle images, diagrams or text colors. + + # In the examples processed by the contributor it was necessary to set + # LAParams.all_texts to True. 
+ + CONTROL = re.compile(r"[\x00-\x08\x0b-\x0c\x0e-\x1f]") + + def __init__( + self, + rsrcmgr: PDFResourceManager, + outfp: AnyIO, + codec: str = "utf8", + pageno: int = 1, + laparams: Optional[LAParams] = None, + stripcontrol: bool = False, + ): + PDFConverter.__init__( + self, rsrcmgr, outfp, codec=codec, pageno=pageno, laparams=laparams + ) + self.stripcontrol = stripcontrol + self.within_chars = False + self.write_header() + + def bbox_repr(self, bbox: Rect) -> str: + (in_x0, in_y0, in_x1, in_y1) = bbox + # PDF y-coordinates are the other way round from hOCR coordinates + out_x0 = int(in_x0) + out_y0 = int(self.page_bbox[3] - in_y1) + out_x1 = int(in_x1) + out_y1 = int(self.page_bbox[3] - in_y0) + return f"bbox {out_x0} {out_y0} {out_x1} {out_y1}" + + def write(self, text: str) -> None: + if self.codec: + encoded_text = text.encode(self.codec) + cast(BinaryIO, self.outfp).write(encoded_text) + else: + cast(TextIO, self.outfp).write(text) + + def write_header(self) -> None: + if self.codec: + self.write( + "\n" % self.codec + ) + else: + self.write( + "\n" + ) + self.write("\n") + self.write("\n") + self.write( + "\n" + ) + self.write( + "\n" + ) + self.write( + " \n" + ) + self.write("\n") + self.write("\n") + + def write_footer(self) -> None: + self.write("\n") + self.write( + "\n" + ) + + def write_text(self, text: str) -> None: + if self.stripcontrol: + text = self.CONTROL.sub("", text) + self.write(text) + + def write_word(self) -> None: + if len(self.working_text) > 0: + bold_and_italic_styles = "" + if "Italic" in self.working_font: + bold_and_italic_styles = "font-style: italic; " + if "Bold" in self.working_font: + bold_and_italic_styles += "font-weight: bold; " + self.write( + "%s" + % ( + ( + self.working_font, + self.working_size, + bold_and_italic_styles, + self.bbox_repr(self.working_bbox), + self.working_font, + self.working_size, + self.working_text.strip(), + ) + ) + ) + self.within_chars = False + + def receive_layout(self, ltpage: LTPage) -> None: + def render(item: LTItem) -> None: + if self.within_chars and isinstance(item, LTAnno): + self.write_word() + if isinstance(item, LTPage): + self.page_bbox = item.bbox + self.write( + "
<div class='ocr_page' id='%s' title='%s'>
\n" + % (item.pageid, self.bbox_repr(item.bbox)) + ) + for child in item: + render(child) + self.write("
\n") + elif isinstance(item, LTTextLine): + self.write( + "" % (self.bbox_repr(item.bbox)) + ) + for child_line in item: + render(child_line) + self.write("\n") + elif isinstance(item, LTTextBox): + self.write( + "
\n" + % (item.index, self.bbox_repr(item.bbox)) + ) + for child in item: + render(child) + self.write("
\n") + elif isinstance(item, LTChar): + if not self.within_chars: + self.within_chars = True + self.working_text = item.get_text() + self.working_bbox = item.bbox + self.working_font = item.fontname + self.working_size = item.size + else: + if len(item.get_text().strip()) == 0: + self.write_word() + self.write(item.get_text()) + else: + if ( + self.working_bbox[1] != item.bbox[1] + or self.working_font != item.fontname + or self.working_size != item.size + ): + self.write_word() + self.working_bbox = item.bbox + self.working_font = item.fontname + self.working_size = item.size + self.working_text += item.get_text() + self.working_bbox = ( + self.working_bbox[0], + self.working_bbox[1], + item.bbox[2], + self.working_bbox[3], + ) + + render(ltpage) + + def close(self) -> None: + self.write_footer() diff --git a/templates/skills/file_manager/dependencies/pdfminer/data_structures.py b/templates/skills/file_manager/dependencies/pdfminer/data_structures.py new file mode 100644 index 00000000..6e3f985d --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/data_structures.py @@ -0,0 +1,52 @@ +from typing import Any, Iterable, List, Optional, Tuple + +from pdfminer import settings +from pdfminer.pdfparser import PDFSyntaxError +from pdfminer.pdftypes import list_value, int_value, dict_value +from pdfminer.utils import choplist + + +class NumberTree: + """A PDF number tree. + + See Section 3.8.6 of the PDF Reference. + """ + + def __init__(self, obj: Any): + self._obj = dict_value(obj) + self.nums: Optional[Iterable[Any]] = None + self.kids: Optional[Iterable[Any]] = None + self.limits: Optional[Iterable[Any]] = None + + if "Nums" in self._obj: + self.nums = list_value(self._obj["Nums"]) + if "Kids" in self._obj: + self.kids = list_value(self._obj["Kids"]) + if "Limits" in self._obj: + self.limits = list_value(self._obj["Limits"]) + + def _parse(self) -> List[Tuple[int, Any]]: + items = [] + if self.nums: # Leaf node + for k, v in choplist(2, self.nums): + items.append((int_value(k), v)) + + if self.kids: # Root or intermediate node + for child_ref in self.kids: + items += NumberTree(child_ref)._parse() + + return items + + values: List[Tuple[int, Any]] # workaround decorators unsupported by mypy + + @property # type: ignore[no-redef,misc] + def values(self) -> List[Tuple[int, Any]]: + values = self._parse() + + if settings.STRICT: + if not all(a[0] <= b[0] for a, b in zip(values, values[1:])): + raise PDFSyntaxError("Number tree elements are out of order") + else: + values.sort(key=lambda t: t[0]) + + return values diff --git a/templates/skills/file_manager/dependencies/pdfminer/encodingdb.py b/templates/skills/file_manager/dependencies/pdfminer/encodingdb.py new file mode 100644 index 00000000..2a08935e --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/encodingdb.py @@ -0,0 +1,126 @@ +import logging +import re +from typing import Dict, Iterable, Optional, cast + +from .pdfexceptions import PDFKeyError +from .glyphlist import glyphname2unicode +from .latin_enc import ENCODING +from .psparser import PSLiteral + +HEXADECIMAL = re.compile(r"[0-9a-fA-F]+") + +log = logging.getLogger(__name__) + + +def name2unicode(name: str) -> str: + """Converts Adobe glyph names to Unicode numbers. + + In contrast to the specification, this raises a KeyError instead of return + an empty string when the key is unknown. + This way the caller must explicitly define what to do + when there is not a match. 
+ + Reference: + https://github.com/adobe-type-tools/agl-specification#2-the-mapping + + :returns unicode character if name resembles something, + otherwise a KeyError + """ + if not isinstance(name, str): + raise PDFKeyError( + 'Could not convert unicode name "%s" to character because ' + "it should be of type str but is of type %s" % (name, type(name)) + ) + + name = name.split(".")[0] + components = name.split("_") + + if len(components) > 1: + return "".join(map(name2unicode, components)) + + else: + if name in glyphname2unicode: + return glyphname2unicode[name] + + elif name.startswith("uni"): + name_without_uni = name.strip("uni") + + if HEXADECIMAL.match(name_without_uni) and len(name_without_uni) % 4 == 0: + unicode_digits = [ + int(name_without_uni[i : i + 4], base=16) + for i in range(0, len(name_without_uni), 4) + ] + for digit in unicode_digits: + raise_key_error_for_invalid_unicode(digit) + characters = map(chr, unicode_digits) + return "".join(characters) + + elif name.startswith("u"): + name_without_u = name.strip("u") + + if HEXADECIMAL.match(name_without_u) and 4 <= len(name_without_u) <= 6: + unicode_digit = int(name_without_u, base=16) + raise_key_error_for_invalid_unicode(unicode_digit) + return chr(unicode_digit) + + raise PDFKeyError( + 'Could not convert unicode name "%s" to character because ' + "it does not match specification" % name + ) + + +def raise_key_error_for_invalid_unicode(unicode_digit: int) -> None: + """Unicode values should not be in the range D800 through DFFF because + that is used for surrogate pairs in UTF-16 + + :raises KeyError if unicode digit is invalid + """ + if 55295 < unicode_digit < 57344: + raise PDFKeyError( + "Unicode digit %d is invalid because " + "it is in the range D800 through DFFF" % unicode_digit + ) + + +class EncodingDB: + + std2unicode: Dict[int, str] = {} + mac2unicode: Dict[int, str] = {} + win2unicode: Dict[int, str] = {} + pdf2unicode: Dict[int, str] = {} + for (name, std, mac, win, pdf) in ENCODING: + c = name2unicode(name) + if std: + std2unicode[std] = c + if mac: + mac2unicode[mac] = c + if win: + win2unicode[win] = c + if pdf: + pdf2unicode[pdf] = c + + encodings = { + "StandardEncoding": std2unicode, + "MacRomanEncoding": mac2unicode, + "WinAnsiEncoding": win2unicode, + "PDFDocEncoding": pdf2unicode, + } + + @classmethod + def get_encoding( + cls, name: str, diff: Optional[Iterable[object]] = None + ) -> Dict[int, str]: + cid2unicode = cls.encodings.get(name, cls.std2unicode) + if diff: + cid2unicode = cid2unicode.copy() + cid = 0 + for x in diff: + if isinstance(x, int): + cid = x + elif isinstance(x, PSLiteral): + try: + cid2unicode[cid] = name2unicode(cast(str, x.name)) + except (KeyError, ValueError) as e: + log.debug(str(e)) + cid += 1 + return cid2unicode diff --git a/templates/skills/file_manager/dependencies/pdfminer/fontmetrics.py b/templates/skills/file_manager/dependencies/pdfminer/fontmetrics.py new file mode 100644 index 00000000..72038a10 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/fontmetrics.py @@ -0,0 +1,4464 @@ +""" Font metrics for the Adobe core 14 fonts. + +Font metrics are used to compute the boundary of each character +written with a proportional font. 
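+For example, glyph advance widths are expressed in 1/1000 of the font
+size; Courier is fixed-pitch, so every entry below is 600:
+
+    from pdfminer.fontmetrics import FONT_METRICS
+
+    props, widths = FONT_METRICS["Courier"]
+    print(props["FontWeight"], widths["A"])  # Medium 600
+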
+ +The following data were extracted from the AFM files: + + http://www.ctan.org/tex-archive/fonts/adobe/afm/ + +""" + +### BEGIN Verbatim copy of the license part + +# +# Adobe Core 35 AFM Files with 314 Glyph Entries - ReadMe +# +# This file and the 35 PostScript(R) AFM files it accompanies may be +# used, copied, and distributed for any purpose and without charge, +# with or without modification, provided that all copyright notices +# are retained; that the AFM files are not distributed without this +# file; that all modifications to this file or any of the AFM files +# are prominently noted in the modified file(s); and that this +# paragraph is not modified. Adobe Systems has no responsibility or +# obligation to support the use of the AFM files. +# + +### END Verbatim copy of the license part + +# flake8: noqa +from typing import Dict + + +def convert_font_metrics(path: str) -> None: + """Convert an AFM file to a mapping of font metrics. + + See below for the output. + """ + fonts = {} + with open(path) as fileinput: + for line in fileinput.readlines(): + f = line.strip().split(" ") + if not f: + continue + k = f[0] + if k == "FontName": + fontname = f[1] + props = {"FontName": fontname, "Flags": 0} + chars: Dict[int, int] = {} + fonts[fontname] = (props, chars) + elif k == "C": + cid = int(f[1]) + if 0 <= cid and cid <= 255: + width = int(f[4]) + chars[cid] = width + elif k in ("CapHeight", "XHeight", "ItalicAngle", "Ascender", "Descender"): + k = {"Ascender": "Ascent", "Descender": "Descent"}.get(k, k) + props[k] = float(f[1]) + elif k in ("FontName", "FamilyName", "Weight"): + k = {"FamilyName": "FontFamily", "Weight": "FontWeight"}.get(k, k) + props[k] = f[1] + elif k == "IsFixedPitch": + if f[1].lower() == "true": + props["Flags"] = 64 + elif k == "FontBBox": + props[k] = tuple(map(float, f[1:5])) + print("# -*- python -*-") + print("FONT_METRICS = {") + for (fontname, (props, chars)) in fonts.items(): + print(f" {fontname!r}: {(props, chars)!r},") + print("}") + + +FONT_METRICS = { + "Courier": ( + { + "FontName": "Courier", + "Descent": -194.0, + "FontBBox": (-6.0, -249.0, 639.0, 803.0), + "FontWeight": "Medium", + "CapHeight": 572.0, + "FontFamily": "Courier", + "Flags": 64, + "XHeight": 434.0, + "ItalicAngle": 0.0, + "Ascent": 627.0, + }, + { + " ": 600, + "!": 600, + '"': 600, + "#": 600, + "$": 600, + "%": 600, + "&": 600, + "'": 600, + "(": 600, + ")": 600, + "*": 600, + "+": 600, + ",": 600, + "-": 600, + ".": 600, + "/": 600, + "0": 600, + "1": 600, + "2": 600, + "3": 600, + "4": 600, + "5": 600, + "6": 600, + "7": 600, + "8": 600, + "9": 600, + ":": 600, + ";": 600, + "<": 600, + "=": 600, + ">": 600, + "?": 600, + "@": 600, + "A": 600, + "B": 600, + "C": 600, + "D": 600, + "E": 600, + "F": 600, + "G": 600, + "H": 600, + "I": 600, + "J": 600, + "K": 600, + "L": 600, + "M": 600, + "N": 600, + "O": 600, + "P": 600, + "Q": 600, + "R": 600, + "S": 600, + "T": 600, + "U": 600, + "V": 600, + "W": 600, + "X": 600, + "Y": 600, + "Z": 600, + "[": 600, + "\\": 600, + "]": 600, + "^": 600, + "_": 600, + "`": 600, + "a": 600, + "b": 600, + "c": 600, + "d": 600, + "e": 600, + "f": 600, + "g": 600, + "h": 600, + "i": 600, + "j": 600, + "k": 600, + "l": 600, + "m": 600, + "n": 600, + "o": 600, + "p": 600, + "q": 600, + "r": 600, + "s": 600, + "t": 600, + "u": 600, + "v": 600, + "w": 600, + "x": 600, + "y": 600, + "z": 600, + "{": 600, + "|": 600, + "}": 600, + "~": 600, + "\xa1": 600, + "\xa2": 600, + "\xa3": 600, + "\xa4": 600, + "\xa5": 600, + "\xa6": 600, + "\xa7": 600, + "\xa8": 
600, + "\xa9": 600, + "\xaa": 600, + "\xab": 600, + "\xac": 600, + "\xae": 600, + "\xaf": 600, + "\xb0": 600, + "\xb1": 600, + "\xb2": 600, + "\xb3": 600, + "\xb4": 600, + "\xb5": 600, + "\xb6": 600, + "\xb7": 600, + "\xb8": 600, + "\xb9": 600, + "\xba": 600, + "\xbb": 600, + "\xbc": 600, + "\xbd": 600, + "\xbe": 600, + "\xbf": 600, + "\xc0": 600, + "\xc1": 600, + "\xc2": 600, + "\xc3": 600, + "\xc4": 600, + "\xc5": 600, + "\xc6": 600, + "\xc7": 600, + "\xc8": 600, + "\xc9": 600, + "\xca": 600, + "\xcb": 600, + "\xcc": 600, + "\xcd": 600, + "\xce": 600, + "\xcf": 600, + "\xd0": 600, + "\xd1": 600, + "\xd2": 600, + "\xd3": 600, + "\xd4": 600, + "\xd5": 600, + "\xd6": 600, + "\xd7": 600, + "\xd8": 600, + "\xd9": 600, + "\xda": 600, + "\xdb": 600, + "\xdc": 600, + "\xdd": 600, + "\xde": 600, + "\xdf": 600, + "\xe0": 600, + "\xe1": 600, + "\xe2": 600, + "\xe3": 600, + "\xe4": 600, + "\xe5": 600, + "\xe6": 600, + "\xe7": 600, + "\xe8": 600, + "\xe9": 600, + "\xea": 600, + "\xeb": 600, + "\xec": 600, + "\xed": 600, + "\xee": 600, + "\xef": 600, + "\xf0": 600, + "\xf1": 600, + "\xf2": 600, + "\xf3": 600, + "\xf4": 600, + "\xf5": 600, + "\xf6": 600, + "\xf7": 600, + "\xf8": 600, + "\xf9": 600, + "\xfa": 600, + "\xfb": 600, + "\xfc": 600, + "\xfd": 600, + "\xfe": 600, + "\xff": 600, + "\u0100": 600, + "\u0101": 600, + "\u0102": 600, + "\u0103": 600, + "\u0104": 600, + "\u0105": 600, + "\u0106": 600, + "\u0107": 600, + "\u010c": 600, + "\u010d": 600, + "\u010e": 600, + "\u010f": 600, + "\u0110": 600, + "\u0111": 600, + "\u0112": 600, + "\u0113": 600, + "\u0116": 600, + "\u0117": 600, + "\u0118": 600, + "\u0119": 600, + "\u011a": 600, + "\u011b": 600, + "\u011e": 600, + "\u011f": 600, + "\u0122": 600, + "\u0123": 600, + "\u012a": 600, + "\u012b": 600, + "\u012e": 600, + "\u012f": 600, + "\u0130": 600, + "\u0131": 600, + "\u0136": 600, + "\u0137": 600, + "\u0139": 600, + "\u013a": 600, + "\u013b": 600, + "\u013c": 600, + "\u013d": 600, + "\u013e": 600, + "\u0141": 600, + "\u0142": 600, + "\u0143": 600, + "\u0144": 600, + "\u0145": 600, + "\u0146": 600, + "\u0147": 600, + "\u0148": 600, + "\u014c": 600, + "\u014d": 600, + "\u0150": 600, + "\u0151": 600, + "\u0152": 600, + "\u0153": 600, + "\u0154": 600, + "\u0155": 600, + "\u0156": 600, + "\u0157": 600, + "\u0158": 600, + "\u0159": 600, + "\u015a": 600, + "\u015b": 600, + "\u015e": 600, + "\u015f": 600, + "\u0160": 600, + "\u0161": 600, + "\u0162": 600, + "\u0163": 600, + "\u0164": 600, + "\u0165": 600, + "\u016a": 600, + "\u016b": 600, + "\u016e": 600, + "\u016f": 600, + "\u0170": 600, + "\u0171": 600, + "\u0172": 600, + "\u0173": 600, + "\u0178": 600, + "\u0179": 600, + "\u017a": 600, + "\u017b": 600, + "\u017c": 600, + "\u017d": 600, + "\u017e": 600, + "\u0192": 600, + "\u0218": 600, + "\u0219": 600, + "\u02c6": 600, + "\u02c7": 600, + "\u02d8": 600, + "\u02d9": 600, + "\u02da": 600, + "\u02db": 600, + "\u02dc": 600, + "\u02dd": 600, + "\u2013": 600, + "\u2014": 600, + "\u2018": 600, + "\u2019": 600, + "\u201a": 600, + "\u201c": 600, + "\u201d": 600, + "\u201e": 600, + "\u2020": 600, + "\u2021": 600, + "\u2022": 600, + "\u2026": 600, + "\u2030": 600, + "\u2039": 600, + "\u203a": 600, + "\u2044": 600, + "\u2122": 600, + "\u2202": 600, + "\u2206": 600, + "\u2211": 600, + "\u2212": 600, + "\u221a": 600, + "\u2260": 600, + "\u2264": 600, + "\u2265": 600, + "\u25ca": 600, + "\uf6c3": 600, + "\ufb01": 600, + "\ufb02": 600, + }, + ), + "Courier-Bold": ( + { + "FontName": "Courier-Bold", + "Descent": -194.0, + "FontBBox": (-88.0, -249.0, 697.0, 811.0), + 
"FontWeight": "Bold", + "CapHeight": 572.0, + "FontFamily": "Courier", + "Flags": 64, + "XHeight": 434.0, + "ItalicAngle": 0.0, + "Ascent": 627.0, + }, + { + " ": 600, + "!": 600, + '"': 600, + "#": 600, + "$": 600, + "%": 600, + "&": 600, + "'": 600, + "(": 600, + ")": 600, + "*": 600, + "+": 600, + ",": 600, + "-": 600, + ".": 600, + "/": 600, + "0": 600, + "1": 600, + "2": 600, + "3": 600, + "4": 600, + "5": 600, + "6": 600, + "7": 600, + "8": 600, + "9": 600, + ":": 600, + ";": 600, + "<": 600, + "=": 600, + ">": 600, + "?": 600, + "@": 600, + "A": 600, + "B": 600, + "C": 600, + "D": 600, + "E": 600, + "F": 600, + "G": 600, + "H": 600, + "I": 600, + "J": 600, + "K": 600, + "L": 600, + "M": 600, + "N": 600, + "O": 600, + "P": 600, + "Q": 600, + "R": 600, + "S": 600, + "T": 600, + "U": 600, + "V": 600, + "W": 600, + "X": 600, + "Y": 600, + "Z": 600, + "[": 600, + "\\": 600, + "]": 600, + "^": 600, + "_": 600, + "`": 600, + "a": 600, + "b": 600, + "c": 600, + "d": 600, + "e": 600, + "f": 600, + "g": 600, + "h": 600, + "i": 600, + "j": 600, + "k": 600, + "l": 600, + "m": 600, + "n": 600, + "o": 600, + "p": 600, + "q": 600, + "r": 600, + "s": 600, + "t": 600, + "u": 600, + "v": 600, + "w": 600, + "x": 600, + "y": 600, + "z": 600, + "{": 600, + "|": 600, + "}": 600, + "~": 600, + "\xa1": 600, + "\xa2": 600, + "\xa3": 600, + "\xa4": 600, + "\xa5": 600, + "\xa6": 600, + "\xa7": 600, + "\xa8": 600, + "\xa9": 600, + "\xaa": 600, + "\xab": 600, + "\xac": 600, + "\xae": 600, + "\xaf": 600, + "\xb0": 600, + "\xb1": 600, + "\xb2": 600, + "\xb3": 600, + "\xb4": 600, + "\xb5": 600, + "\xb6": 600, + "\xb7": 600, + "\xb8": 600, + "\xb9": 600, + "\xba": 600, + "\xbb": 600, + "\xbc": 600, + "\xbd": 600, + "\xbe": 600, + "\xbf": 600, + "\xc0": 600, + "\xc1": 600, + "\xc2": 600, + "\xc3": 600, + "\xc4": 600, + "\xc5": 600, + "\xc6": 600, + "\xc7": 600, + "\xc8": 600, + "\xc9": 600, + "\xca": 600, + "\xcb": 600, + "\xcc": 600, + "\xcd": 600, + "\xce": 600, + "\xcf": 600, + "\xd0": 600, + "\xd1": 600, + "\xd2": 600, + "\xd3": 600, + "\xd4": 600, + "\xd5": 600, + "\xd6": 600, + "\xd7": 600, + "\xd8": 600, + "\xd9": 600, + "\xda": 600, + "\xdb": 600, + "\xdc": 600, + "\xdd": 600, + "\xde": 600, + "\xdf": 600, + "\xe0": 600, + "\xe1": 600, + "\xe2": 600, + "\xe3": 600, + "\xe4": 600, + "\xe5": 600, + "\xe6": 600, + "\xe7": 600, + "\xe8": 600, + "\xe9": 600, + "\xea": 600, + "\xeb": 600, + "\xec": 600, + "\xed": 600, + "\xee": 600, + "\xef": 600, + "\xf0": 600, + "\xf1": 600, + "\xf2": 600, + "\xf3": 600, + "\xf4": 600, + "\xf5": 600, + "\xf6": 600, + "\xf7": 600, + "\xf8": 600, + "\xf9": 600, + "\xfa": 600, + "\xfb": 600, + "\xfc": 600, + "\xfd": 600, + "\xfe": 600, + "\xff": 600, + "\u0100": 600, + "\u0101": 600, + "\u0102": 600, + "\u0103": 600, + "\u0104": 600, + "\u0105": 600, + "\u0106": 600, + "\u0107": 600, + "\u010c": 600, + "\u010d": 600, + "\u010e": 600, + "\u010f": 600, + "\u0110": 600, + "\u0111": 600, + "\u0112": 600, + "\u0113": 600, + "\u0116": 600, + "\u0117": 600, + "\u0118": 600, + "\u0119": 600, + "\u011a": 600, + "\u011b": 600, + "\u011e": 600, + "\u011f": 600, + "\u0122": 600, + "\u0123": 600, + "\u012a": 600, + "\u012b": 600, + "\u012e": 600, + "\u012f": 600, + "\u0130": 600, + "\u0131": 600, + "\u0136": 600, + "\u0137": 600, + "\u0139": 600, + "\u013a": 600, + "\u013b": 600, + "\u013c": 600, + "\u013d": 600, + "\u013e": 600, + "\u0141": 600, + "\u0142": 600, + "\u0143": 600, + "\u0144": 600, + "\u0145": 600, + "\u0146": 600, + "\u0147": 600, + "\u0148": 600, + "\u014c": 600, + "\u014d": 
600, + "\u0150": 600, + "\u0151": 600, + "\u0152": 600, + "\u0153": 600, + "\u0154": 600, + "\u0155": 600, + "\u0156": 600, + "\u0157": 600, + "\u0158": 600, + "\u0159": 600, + "\u015a": 600, + "\u015b": 600, + "\u015e": 600, + "\u015f": 600, + "\u0160": 600, + "\u0161": 600, + "\u0162": 600, + "\u0163": 600, + "\u0164": 600, + "\u0165": 600, + "\u016a": 600, + "\u016b": 600, + "\u016e": 600, + "\u016f": 600, + "\u0170": 600, + "\u0171": 600, + "\u0172": 600, + "\u0173": 600, + "\u0178": 600, + "\u0179": 600, + "\u017a": 600, + "\u017b": 600, + "\u017c": 600, + "\u017d": 600, + "\u017e": 600, + "\u0192": 600, + "\u0218": 600, + "\u0219": 600, + "\u02c6": 600, + "\u02c7": 600, + "\u02d8": 600, + "\u02d9": 600, + "\u02da": 600, + "\u02db": 600, + "\u02dc": 600, + "\u02dd": 600, + "\u2013": 600, + "\u2014": 600, + "\u2018": 600, + "\u2019": 600, + "\u201a": 600, + "\u201c": 600, + "\u201d": 600, + "\u201e": 600, + "\u2020": 600, + "\u2021": 600, + "\u2022": 600, + "\u2026": 600, + "\u2030": 600, + "\u2039": 600, + "\u203a": 600, + "\u2044": 600, + "\u2122": 600, + "\u2202": 600, + "\u2206": 600, + "\u2211": 600, + "\u2212": 600, + "\u221a": 600, + "\u2260": 600, + "\u2264": 600, + "\u2265": 600, + "\u25ca": 600, + "\uf6c3": 600, + "\ufb01": 600, + "\ufb02": 600, + }, + ), + "Courier-BoldOblique": ( + { + "FontName": "Courier-BoldOblique", + "Descent": -194.0, + "FontBBox": (-49.0, -249.0, 758.0, 811.0), + "FontWeight": "Bold", + "CapHeight": 572.0, + "FontFamily": "Courier", + "Flags": 64, + "XHeight": 434.0, + "ItalicAngle": -11.0, + "Ascent": 627.0, + }, + { + " ": 600, + "!": 600, + '"': 600, + "#": 600, + "$": 600, + "%": 600, + "&": 600, + "'": 600, + "(": 600, + ")": 600, + "*": 600, + "+": 600, + ",": 600, + "-": 600, + ".": 600, + "/": 600, + "0": 600, + "1": 600, + "2": 600, + "3": 600, + "4": 600, + "5": 600, + "6": 600, + "7": 600, + "8": 600, + "9": 600, + ":": 600, + ";": 600, + "<": 600, + "=": 600, + ">": 600, + "?": 600, + "@": 600, + "A": 600, + "B": 600, + "C": 600, + "D": 600, + "E": 600, + "F": 600, + "G": 600, + "H": 600, + "I": 600, + "J": 600, + "K": 600, + "L": 600, + "M": 600, + "N": 600, + "O": 600, + "P": 600, + "Q": 600, + "R": 600, + "S": 600, + "T": 600, + "U": 600, + "V": 600, + "W": 600, + "X": 600, + "Y": 600, + "Z": 600, + "[": 600, + "\\": 600, + "]": 600, + "^": 600, + "_": 600, + "`": 600, + "a": 600, + "b": 600, + "c": 600, + "d": 600, + "e": 600, + "f": 600, + "g": 600, + "h": 600, + "i": 600, + "j": 600, + "k": 600, + "l": 600, + "m": 600, + "n": 600, + "o": 600, + "p": 600, + "q": 600, + "r": 600, + "s": 600, + "t": 600, + "u": 600, + "v": 600, + "w": 600, + "x": 600, + "y": 600, + "z": 600, + "{": 600, + "|": 600, + "}": 600, + "~": 600, + "\xa1": 600, + "\xa2": 600, + "\xa3": 600, + "\xa4": 600, + "\xa5": 600, + "\xa6": 600, + "\xa7": 600, + "\xa8": 600, + "\xa9": 600, + "\xaa": 600, + "\xab": 600, + "\xac": 600, + "\xae": 600, + "\xaf": 600, + "\xb0": 600, + "\xb1": 600, + "\xb2": 600, + "\xb3": 600, + "\xb4": 600, + "\xb5": 600, + "\xb6": 600, + "\xb7": 600, + "\xb8": 600, + "\xb9": 600, + "\xba": 600, + "\xbb": 600, + "\xbc": 600, + "\xbd": 600, + "\xbe": 600, + "\xbf": 600, + "\xc0": 600, + "\xc1": 600, + "\xc2": 600, + "\xc3": 600, + "\xc4": 600, + "\xc5": 600, + "\xc6": 600, + "\xc7": 600, + "\xc8": 600, + "\xc9": 600, + "\xca": 600, + "\xcb": 600, + "\xcc": 600, + "\xcd": 600, + "\xce": 600, + "\xcf": 600, + "\xd0": 600, + "\xd1": 600, + "\xd2": 600, + "\xd3": 600, + "\xd4": 600, + "\xd5": 600, + "\xd6": 600, + "\xd7": 600, + "\xd8": 600, + 
"\xd9": 600, + "\xda": 600, + "\xdb": 600, + "\xdc": 600, + "\xdd": 600, + "\xde": 600, + "\xdf": 600, + "\xe0": 600, + "\xe1": 600, + "\xe2": 600, + "\xe3": 600, + "\xe4": 600, + "\xe5": 600, + "\xe6": 600, + "\xe7": 600, + "\xe8": 600, + "\xe9": 600, + "\xea": 600, + "\xeb": 600, + "\xec": 600, + "\xed": 600, + "\xee": 600, + "\xef": 600, + "\xf0": 600, + "\xf1": 600, + "\xf2": 600, + "\xf3": 600, + "\xf4": 600, + "\xf5": 600, + "\xf6": 600, + "\xf7": 600, + "\xf8": 600, + "\xf9": 600, + "\xfa": 600, + "\xfb": 600, + "\xfc": 600, + "\xfd": 600, + "\xfe": 600, + "\xff": 600, + "\u0100": 600, + "\u0101": 600, + "\u0102": 600, + "\u0103": 600, + "\u0104": 600, + "\u0105": 600, + "\u0106": 600, + "\u0107": 600, + "\u010c": 600, + "\u010d": 600, + "\u010e": 600, + "\u010f": 600, + "\u0110": 600, + "\u0111": 600, + "\u0112": 600, + "\u0113": 600, + "\u0116": 600, + "\u0117": 600, + "\u0118": 600, + "\u0119": 600, + "\u011a": 600, + "\u011b": 600, + "\u011e": 600, + "\u011f": 600, + "\u0122": 600, + "\u0123": 600, + "\u012a": 600, + "\u012b": 600, + "\u012e": 600, + "\u012f": 600, + "\u0130": 600, + "\u0131": 600, + "\u0136": 600, + "\u0137": 600, + "\u0139": 600, + "\u013a": 600, + "\u013b": 600, + "\u013c": 600, + "\u013d": 600, + "\u013e": 600, + "\u0141": 600, + "\u0142": 600, + "\u0143": 600, + "\u0144": 600, + "\u0145": 600, + "\u0146": 600, + "\u0147": 600, + "\u0148": 600, + "\u014c": 600, + "\u014d": 600, + "\u0150": 600, + "\u0151": 600, + "\u0152": 600, + "\u0153": 600, + "\u0154": 600, + "\u0155": 600, + "\u0156": 600, + "\u0157": 600, + "\u0158": 600, + "\u0159": 600, + "\u015a": 600, + "\u015b": 600, + "\u015e": 600, + "\u015f": 600, + "\u0160": 600, + "\u0161": 600, + "\u0162": 600, + "\u0163": 600, + "\u0164": 600, + "\u0165": 600, + "\u016a": 600, + "\u016b": 600, + "\u016e": 600, + "\u016f": 600, + "\u0170": 600, + "\u0171": 600, + "\u0172": 600, + "\u0173": 600, + "\u0178": 600, + "\u0179": 600, + "\u017a": 600, + "\u017b": 600, + "\u017c": 600, + "\u017d": 600, + "\u017e": 600, + "\u0192": 600, + "\u0218": 600, + "\u0219": 600, + "\u02c6": 600, + "\u02c7": 600, + "\u02d8": 600, + "\u02d9": 600, + "\u02da": 600, + "\u02db": 600, + "\u02dc": 600, + "\u02dd": 600, + "\u2013": 600, + "\u2014": 600, + "\u2018": 600, + "\u2019": 600, + "\u201a": 600, + "\u201c": 600, + "\u201d": 600, + "\u201e": 600, + "\u2020": 600, + "\u2021": 600, + "\u2022": 600, + "\u2026": 600, + "\u2030": 600, + "\u2039": 600, + "\u203a": 600, + "\u2044": 600, + "\u2122": 600, + "\u2202": 600, + "\u2206": 600, + "\u2211": 600, + "\u2212": 600, + "\u221a": 600, + "\u2260": 600, + "\u2264": 600, + "\u2265": 600, + "\u25ca": 600, + "\uf6c3": 600, + "\ufb01": 600, + "\ufb02": 600, + }, + ), + "Courier-Oblique": ( + { + "FontName": "Courier-Oblique", + "Descent": -194.0, + "FontBBox": (-49.0, -249.0, 749.0, 803.0), + "FontWeight": "Medium", + "CapHeight": 572.0, + "FontFamily": "Courier", + "Flags": 64, + "XHeight": 434.0, + "ItalicAngle": -11.0, + "Ascent": 627.0, + }, + { + " ": 600, + "!": 600, + '"': 600, + "#": 600, + "$": 600, + "%": 600, + "&": 600, + "'": 600, + "(": 600, + ")": 600, + "*": 600, + "+": 600, + ",": 600, + "-": 600, + ".": 600, + "/": 600, + "0": 600, + "1": 600, + "2": 600, + "3": 600, + "4": 600, + "5": 600, + "6": 600, + "7": 600, + "8": 600, + "9": 600, + ":": 600, + ";": 600, + "<": 600, + "=": 600, + ">": 600, + "?": 600, + "@": 600, + "A": 600, + "B": 600, + "C": 600, + "D": 600, + "E": 600, + "F": 600, + "G": 600, + "H": 600, + "I": 600, + "J": 600, + "K": 600, + "L": 600, + "M": 
600, + "N": 600, + "O": 600, + "P": 600, + "Q": 600, + "R": 600, + "S": 600, + "T": 600, + "U": 600, + "V": 600, + "W": 600, + "X": 600, + "Y": 600, + "Z": 600, + "[": 600, + "\\": 600, + "]": 600, + "^": 600, + "_": 600, + "`": 600, + "a": 600, + "b": 600, + "c": 600, + "d": 600, + "e": 600, + "f": 600, + "g": 600, + "h": 600, + "i": 600, + "j": 600, + "k": 600, + "l": 600, + "m": 600, + "n": 600, + "o": 600, + "p": 600, + "q": 600, + "r": 600, + "s": 600, + "t": 600, + "u": 600, + "v": 600, + "w": 600, + "x": 600, + "y": 600, + "z": 600, + "{": 600, + "|": 600, + "}": 600, + "~": 600, + "\xa1": 600, + "\xa2": 600, + "\xa3": 600, + "\xa4": 600, + "\xa5": 600, + "\xa6": 600, + "\xa7": 600, + "\xa8": 600, + "\xa9": 600, + "\xaa": 600, + "\xab": 600, + "\xac": 600, + "\xae": 600, + "\xaf": 600, + "\xb0": 600, + "\xb1": 600, + "\xb2": 600, + "\xb3": 600, + "\xb4": 600, + "\xb5": 600, + "\xb6": 600, + "\xb7": 600, + "\xb8": 600, + "\xb9": 600, + "\xba": 600, + "\xbb": 600, + "\xbc": 600, + "\xbd": 600, + "\xbe": 600, + "\xbf": 600, + "\xc0": 600, + "\xc1": 600, + "\xc2": 600, + "\xc3": 600, + "\xc4": 600, + "\xc5": 600, + "\xc6": 600, + "\xc7": 600, + "\xc8": 600, + "\xc9": 600, + "\xca": 600, + "\xcb": 600, + "\xcc": 600, + "\xcd": 600, + "\xce": 600, + "\xcf": 600, + "\xd0": 600, + "\xd1": 600, + "\xd2": 600, + "\xd3": 600, + "\xd4": 600, + "\xd5": 600, + "\xd6": 600, + "\xd7": 600, + "\xd8": 600, + "\xd9": 600, + "\xda": 600, + "\xdb": 600, + "\xdc": 600, + "\xdd": 600, + "\xde": 600, + "\xdf": 600, + "\xe0": 600, + "\xe1": 600, + "\xe2": 600, + "\xe3": 600, + "\xe4": 600, + "\xe5": 600, + "\xe6": 600, + "\xe7": 600, + "\xe8": 600, + "\xe9": 600, + "\xea": 600, + "\xeb": 600, + "\xec": 600, + "\xed": 600, + "\xee": 600, + "\xef": 600, + "\xf0": 600, + "\xf1": 600, + "\xf2": 600, + "\xf3": 600, + "\xf4": 600, + "\xf5": 600, + "\xf6": 600, + "\xf7": 600, + "\xf8": 600, + "\xf9": 600, + "\xfa": 600, + "\xfb": 600, + "\xfc": 600, + "\xfd": 600, + "\xfe": 600, + "\xff": 600, + "\u0100": 600, + "\u0101": 600, + "\u0102": 600, + "\u0103": 600, + "\u0104": 600, + "\u0105": 600, + "\u0106": 600, + "\u0107": 600, + "\u010c": 600, + "\u010d": 600, + "\u010e": 600, + "\u010f": 600, + "\u0110": 600, + "\u0111": 600, + "\u0112": 600, + "\u0113": 600, + "\u0116": 600, + "\u0117": 600, + "\u0118": 600, + "\u0119": 600, + "\u011a": 600, + "\u011b": 600, + "\u011e": 600, + "\u011f": 600, + "\u0122": 600, + "\u0123": 600, + "\u012a": 600, + "\u012b": 600, + "\u012e": 600, + "\u012f": 600, + "\u0130": 600, + "\u0131": 600, + "\u0136": 600, + "\u0137": 600, + "\u0139": 600, + "\u013a": 600, + "\u013b": 600, + "\u013c": 600, + "\u013d": 600, + "\u013e": 600, + "\u0141": 600, + "\u0142": 600, + "\u0143": 600, + "\u0144": 600, + "\u0145": 600, + "\u0146": 600, + "\u0147": 600, + "\u0148": 600, + "\u014c": 600, + "\u014d": 600, + "\u0150": 600, + "\u0151": 600, + "\u0152": 600, + "\u0153": 600, + "\u0154": 600, + "\u0155": 600, + "\u0156": 600, + "\u0157": 600, + "\u0158": 600, + "\u0159": 600, + "\u015a": 600, + "\u015b": 600, + "\u015e": 600, + "\u015f": 600, + "\u0160": 600, + "\u0161": 600, + "\u0162": 600, + "\u0163": 600, + "\u0164": 600, + "\u0165": 600, + "\u016a": 600, + "\u016b": 600, + "\u016e": 600, + "\u016f": 600, + "\u0170": 600, + "\u0171": 600, + "\u0172": 600, + "\u0173": 600, + "\u0178": 600, + "\u0179": 600, + "\u017a": 600, + "\u017b": 600, + "\u017c": 600, + "\u017d": 600, + "\u017e": 600, + "\u0192": 600, + "\u0218": 600, + "\u0219": 600, + "\u02c6": 600, + "\u02c7": 600, + "\u02d8": 600, + 
"\u02d9": 600, + "\u02da": 600, + "\u02db": 600, + "\u02dc": 600, + "\u02dd": 600, + "\u2013": 600, + "\u2014": 600, + "\u2018": 600, + "\u2019": 600, + "\u201a": 600, + "\u201c": 600, + "\u201d": 600, + "\u201e": 600, + "\u2020": 600, + "\u2021": 600, + "\u2022": 600, + "\u2026": 600, + "\u2030": 600, + "\u2039": 600, + "\u203a": 600, + "\u2044": 600, + "\u2122": 600, + "\u2202": 600, + "\u2206": 600, + "\u2211": 600, + "\u2212": 600, + "\u221a": 600, + "\u2260": 600, + "\u2264": 600, + "\u2265": 600, + "\u25ca": 600, + "\uf6c3": 600, + "\ufb01": 600, + "\ufb02": 600, + }, + ), + "Helvetica": ( + { + "FontName": "Helvetica", + "Descent": -207.0, + "FontBBox": (-166.0, -225.0, 1000.0, 931.0), + "FontWeight": "Medium", + "CapHeight": 718.0, + "FontFamily": "Helvetica", + "Flags": 0, + "XHeight": 523.0, + "ItalicAngle": 0.0, + "Ascent": 718.0, + }, + { + " ": 278, + "!": 278, + '"': 355, + "#": 556, + "$": 556, + "%": 889, + "&": 667, + "'": 191, + "(": 333, + ")": 333, + "*": 389, + "+": 584, + ",": 278, + "-": 333, + ".": 278, + "/": 278, + "0": 556, + "1": 556, + "2": 556, + "3": 556, + "4": 556, + "5": 556, + "6": 556, + "7": 556, + "8": 556, + "9": 556, + ":": 278, + ";": 278, + "<": 584, + "=": 584, + ">": 584, + "?": 556, + "@": 1015, + "A": 667, + "B": 667, + "C": 722, + "D": 722, + "E": 667, + "F": 611, + "G": 778, + "H": 722, + "I": 278, + "J": 500, + "K": 667, + "L": 556, + "M": 833, + "N": 722, + "O": 778, + "P": 667, + "Q": 778, + "R": 722, + "S": 667, + "T": 611, + "U": 722, + "V": 667, + "W": 944, + "X": 667, + "Y": 667, + "Z": 611, + "[": 278, + "\\": 278, + "]": 278, + "^": 469, + "_": 556, + "`": 333, + "a": 556, + "b": 556, + "c": 500, + "d": 556, + "e": 556, + "f": 278, + "g": 556, + "h": 556, + "i": 222, + "j": 222, + "k": 500, + "l": 222, + "m": 833, + "n": 556, + "o": 556, + "p": 556, + "q": 556, + "r": 333, + "s": 500, + "t": 278, + "u": 556, + "v": 500, + "w": 722, + "x": 500, + "y": 500, + "z": 500, + "{": 334, + "|": 260, + "}": 334, + "~": 584, + "\xa1": 333, + "\xa2": 556, + "\xa3": 556, + "\xa4": 556, + "\xa5": 556, + "\xa6": 260, + "\xa7": 556, + "\xa8": 333, + "\xa9": 737, + "\xaa": 370, + "\xab": 556, + "\xac": 584, + "\xae": 737, + "\xaf": 333, + "\xb0": 400, + "\xb1": 584, + "\xb2": 333, + "\xb3": 333, + "\xb4": 333, + "\xb5": 556, + "\xb6": 537, + "\xb7": 278, + "\xb8": 333, + "\xb9": 333, + "\xba": 365, + "\xbb": 556, + "\xbc": 834, + "\xbd": 834, + "\xbe": 834, + "\xbf": 611, + "\xc0": 667, + "\xc1": 667, + "\xc2": 667, + "\xc3": 667, + "\xc4": 667, + "\xc5": 667, + "\xc6": 1000, + "\xc7": 722, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 278, + "\xcd": 278, + "\xce": 278, + "\xcf": 278, + "\xd0": 722, + "\xd1": 722, + "\xd2": 778, + "\xd3": 778, + "\xd4": 778, + "\xd5": 778, + "\xd6": 778, + "\xd7": 584, + "\xd8": 778, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 667, + "\xde": 667, + "\xdf": 611, + "\xe0": 556, + "\xe1": 556, + "\xe2": 556, + "\xe3": 556, + "\xe4": 556, + "\xe5": 556, + "\xe6": 889, + "\xe7": 500, + "\xe8": 556, + "\xe9": 556, + "\xea": 556, + "\xeb": 556, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 556, + "\xf1": 556, + "\xf2": 556, + "\xf3": 556, + "\xf4": 556, + "\xf5": 556, + "\xf6": 556, + "\xf7": 584, + "\xf8": 611, + "\xf9": 556, + "\xfa": 556, + "\xfb": 556, + "\xfc": 556, + "\xfd": 500, + "\xfe": 556, + "\xff": 500, + "\u0100": 667, + "\u0101": 556, + "\u0102": 667, + "\u0103": 556, + "\u0104": 667, + "\u0105": 556, + "\u0106": 722, + "\u0107": 500, + 
"\u010c": 722, + "\u010d": 500, + "\u010e": 722, + "\u010f": 643, + "\u0110": 722, + "\u0111": 556, + "\u0112": 667, + "\u0113": 556, + "\u0116": 667, + "\u0117": 556, + "\u0118": 667, + "\u0119": 556, + "\u011a": 667, + "\u011b": 556, + "\u011e": 778, + "\u011f": 556, + "\u0122": 778, + "\u0123": 556, + "\u012a": 278, + "\u012b": 278, + "\u012e": 278, + "\u012f": 222, + "\u0130": 278, + "\u0131": 278, + "\u0136": 667, + "\u0137": 500, + "\u0139": 556, + "\u013a": 222, + "\u013b": 556, + "\u013c": 222, + "\u013d": 556, + "\u013e": 299, + "\u0141": 556, + "\u0142": 222, + "\u0143": 722, + "\u0144": 556, + "\u0145": 722, + "\u0146": 556, + "\u0147": 722, + "\u0148": 556, + "\u014c": 778, + "\u014d": 556, + "\u0150": 778, + "\u0151": 556, + "\u0152": 1000, + "\u0153": 944, + "\u0154": 722, + "\u0155": 333, + "\u0156": 722, + "\u0157": 333, + "\u0158": 722, + "\u0159": 333, + "\u015a": 667, + "\u015b": 500, + "\u015e": 667, + "\u015f": 500, + "\u0160": 667, + "\u0161": 500, + "\u0162": 611, + "\u0163": 278, + "\u0164": 611, + "\u0165": 317, + "\u016a": 722, + "\u016b": 556, + "\u016e": 722, + "\u016f": 556, + "\u0170": 722, + "\u0171": 556, + "\u0172": 722, + "\u0173": 556, + "\u0178": 667, + "\u0179": 611, + "\u017a": 500, + "\u017b": 611, + "\u017c": 500, + "\u017d": 611, + "\u017e": 500, + "\u0192": 556, + "\u0218": 667, + "\u0219": 500, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 556, + "\u2014": 1000, + "\u2018": 222, + "\u2019": 222, + "\u201a": 222, + "\u201c": 333, + "\u201d": 333, + "\u201e": 333, + "\u2020": 556, + "\u2021": 556, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 476, + "\u2206": 612, + "\u2211": 600, + "\u2212": 584, + "\u221a": 453, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 471, + "\uf6c3": 250, + "\ufb01": 500, + "\ufb02": 500, + }, + ), + "Helvetica-Bold": ( + { + "FontName": "Helvetica-Bold", + "Descent": -207.0, + "FontBBox": (-170.0, -228.0, 1003.0, 962.0), + "FontWeight": "Bold", + "CapHeight": 718.0, + "FontFamily": "Helvetica", + "Flags": 0, + "XHeight": 532.0, + "ItalicAngle": 0.0, + "Ascent": 718.0, + }, + { + " ": 278, + "!": 333, + '"': 474, + "#": 556, + "$": 556, + "%": 889, + "&": 722, + "'": 238, + "(": 333, + ")": 333, + "*": 389, + "+": 584, + ",": 278, + "-": 333, + ".": 278, + "/": 278, + "0": 556, + "1": 556, + "2": 556, + "3": 556, + "4": 556, + "5": 556, + "6": 556, + "7": 556, + "8": 556, + "9": 556, + ":": 333, + ";": 333, + "<": 584, + "=": 584, + ">": 584, + "?": 611, + "@": 975, + "A": 722, + "B": 722, + "C": 722, + "D": 722, + "E": 667, + "F": 611, + "G": 778, + "H": 722, + "I": 278, + "J": 556, + "K": 722, + "L": 611, + "M": 833, + "N": 722, + "O": 778, + "P": 667, + "Q": 778, + "R": 722, + "S": 667, + "T": 611, + "U": 722, + "V": 667, + "W": 944, + "X": 667, + "Y": 667, + "Z": 611, + "[": 333, + "\\": 278, + "]": 333, + "^": 584, + "_": 556, + "`": 333, + "a": 556, + "b": 611, + "c": 556, + "d": 611, + "e": 556, + "f": 333, + "g": 611, + "h": 611, + "i": 278, + "j": 278, + "k": 556, + "l": 278, + "m": 889, + "n": 611, + "o": 611, + "p": 611, + "q": 611, + "r": 389, + "s": 556, + "t": 333, + "u": 611, + "v": 556, + "w": 778, + "x": 556, + "y": 556, + "z": 500, + "{": 389, + "|": 280, + "}": 389, + "~": 584, + "\xa1": 333, + "\xa2": 556, + "\xa3": 556, + "\xa4": 556, + "\xa5": 556, + "\xa6": 280, + "\xa7": 556, + "\xa8": 333, + 
"\xa9": 737, + "\xaa": 370, + "\xab": 556, + "\xac": 584, + "\xae": 737, + "\xaf": 333, + "\xb0": 400, + "\xb1": 584, + "\xb2": 333, + "\xb3": 333, + "\xb4": 333, + "\xb5": 611, + "\xb6": 556, + "\xb7": 278, + "\xb8": 333, + "\xb9": 333, + "\xba": 365, + "\xbb": 556, + "\xbc": 834, + "\xbd": 834, + "\xbe": 834, + "\xbf": 611, + "\xc0": 722, + "\xc1": 722, + "\xc2": 722, + "\xc3": 722, + "\xc4": 722, + "\xc5": 722, + "\xc6": 1000, + "\xc7": 722, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 278, + "\xcd": 278, + "\xce": 278, + "\xcf": 278, + "\xd0": 722, + "\xd1": 722, + "\xd2": 778, + "\xd3": 778, + "\xd4": 778, + "\xd5": 778, + "\xd6": 778, + "\xd7": 584, + "\xd8": 778, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 667, + "\xde": 667, + "\xdf": 611, + "\xe0": 556, + "\xe1": 556, + "\xe2": 556, + "\xe3": 556, + "\xe4": 556, + "\xe5": 556, + "\xe6": 889, + "\xe7": 556, + "\xe8": 556, + "\xe9": 556, + "\xea": 556, + "\xeb": 556, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 611, + "\xf1": 611, + "\xf2": 611, + "\xf3": 611, + "\xf4": 611, + "\xf5": 611, + "\xf6": 611, + "\xf7": 584, + "\xf8": 611, + "\xf9": 611, + "\xfa": 611, + "\xfb": 611, + "\xfc": 611, + "\xfd": 556, + "\xfe": 611, + "\xff": 556, + "\u0100": 722, + "\u0101": 556, + "\u0102": 722, + "\u0103": 556, + "\u0104": 722, + "\u0105": 556, + "\u0106": 722, + "\u0107": 556, + "\u010c": 722, + "\u010d": 556, + "\u010e": 722, + "\u010f": 743, + "\u0110": 722, + "\u0111": 611, + "\u0112": 667, + "\u0113": 556, + "\u0116": 667, + "\u0117": 556, + "\u0118": 667, + "\u0119": 556, + "\u011a": 667, + "\u011b": 556, + "\u011e": 778, + "\u011f": 611, + "\u0122": 778, + "\u0123": 611, + "\u012a": 278, + "\u012b": 278, + "\u012e": 278, + "\u012f": 278, + "\u0130": 278, + "\u0131": 278, + "\u0136": 722, + "\u0137": 556, + "\u0139": 611, + "\u013a": 278, + "\u013b": 611, + "\u013c": 278, + "\u013d": 611, + "\u013e": 400, + "\u0141": 611, + "\u0142": 278, + "\u0143": 722, + "\u0144": 611, + "\u0145": 722, + "\u0146": 611, + "\u0147": 722, + "\u0148": 611, + "\u014c": 778, + "\u014d": 611, + "\u0150": 778, + "\u0151": 611, + "\u0152": 1000, + "\u0153": 944, + "\u0154": 722, + "\u0155": 389, + "\u0156": 722, + "\u0157": 389, + "\u0158": 722, + "\u0159": 389, + "\u015a": 667, + "\u015b": 556, + "\u015e": 667, + "\u015f": 556, + "\u0160": 667, + "\u0161": 556, + "\u0162": 611, + "\u0163": 333, + "\u0164": 611, + "\u0165": 389, + "\u016a": 722, + "\u016b": 611, + "\u016e": 722, + "\u016f": 611, + "\u0170": 722, + "\u0171": 611, + "\u0172": 722, + "\u0173": 611, + "\u0178": 667, + "\u0179": 611, + "\u017a": 500, + "\u017b": 611, + "\u017c": 500, + "\u017d": 611, + "\u017e": 500, + "\u0192": 556, + "\u0218": 667, + "\u0219": 556, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 556, + "\u2014": 1000, + "\u2018": 278, + "\u2019": 278, + "\u201a": 278, + "\u201c": 500, + "\u201d": 500, + "\u201e": 500, + "\u2020": 556, + "\u2021": 556, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 494, + "\u2206": 612, + "\u2211": 600, + "\u2212": 584, + "\u221a": 549, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 494, + "\uf6c3": 250, + "\ufb01": 611, + "\ufb02": 611, + }, + ), + "Helvetica-BoldOblique": ( + { + "FontName": "Helvetica-BoldOblique", + "Descent": -207.0, + "FontBBox": (-175.0, -228.0, 
1114.0, 962.0), + "FontWeight": "Bold", + "CapHeight": 718.0, + "FontFamily": "Helvetica", + "Flags": 0, + "XHeight": 532.0, + "ItalicAngle": -12.0, + "Ascent": 718.0, + }, + { + " ": 278, + "!": 333, + '"': 474, + "#": 556, + "$": 556, + "%": 889, + "&": 722, + "'": 238, + "(": 333, + ")": 333, + "*": 389, + "+": 584, + ",": 278, + "-": 333, + ".": 278, + "/": 278, + "0": 556, + "1": 556, + "2": 556, + "3": 556, + "4": 556, + "5": 556, + "6": 556, + "7": 556, + "8": 556, + "9": 556, + ":": 333, + ";": 333, + "<": 584, + "=": 584, + ">": 584, + "?": 611, + "@": 975, + "A": 722, + "B": 722, + "C": 722, + "D": 722, + "E": 667, + "F": 611, + "G": 778, + "H": 722, + "I": 278, + "J": 556, + "K": 722, + "L": 611, + "M": 833, + "N": 722, + "O": 778, + "P": 667, + "Q": 778, + "R": 722, + "S": 667, + "T": 611, + "U": 722, + "V": 667, + "W": 944, + "X": 667, + "Y": 667, + "Z": 611, + "[": 333, + "\\": 278, + "]": 333, + "^": 584, + "_": 556, + "`": 333, + "a": 556, + "b": 611, + "c": 556, + "d": 611, + "e": 556, + "f": 333, + "g": 611, + "h": 611, + "i": 278, + "j": 278, + "k": 556, + "l": 278, + "m": 889, + "n": 611, + "o": 611, + "p": 611, + "q": 611, + "r": 389, + "s": 556, + "t": 333, + "u": 611, + "v": 556, + "w": 778, + "x": 556, + "y": 556, + "z": 500, + "{": 389, + "|": 280, + "}": 389, + "~": 584, + "\xa1": 333, + "\xa2": 556, + "\xa3": 556, + "\xa4": 556, + "\xa5": 556, + "\xa6": 280, + "\xa7": 556, + "\xa8": 333, + "\xa9": 737, + "\xaa": 370, + "\xab": 556, + "\xac": 584, + "\xae": 737, + "\xaf": 333, + "\xb0": 400, + "\xb1": 584, + "\xb2": 333, + "\xb3": 333, + "\xb4": 333, + "\xb5": 611, + "\xb6": 556, + "\xb7": 278, + "\xb8": 333, + "\xb9": 333, + "\xba": 365, + "\xbb": 556, + "\xbc": 834, + "\xbd": 834, + "\xbe": 834, + "\xbf": 611, + "\xc0": 722, + "\xc1": 722, + "\xc2": 722, + "\xc3": 722, + "\xc4": 722, + "\xc5": 722, + "\xc6": 1000, + "\xc7": 722, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 278, + "\xcd": 278, + "\xce": 278, + "\xcf": 278, + "\xd0": 722, + "\xd1": 722, + "\xd2": 778, + "\xd3": 778, + "\xd4": 778, + "\xd5": 778, + "\xd6": 778, + "\xd7": 584, + "\xd8": 778, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 667, + "\xde": 667, + "\xdf": 611, + "\xe0": 556, + "\xe1": 556, + "\xe2": 556, + "\xe3": 556, + "\xe4": 556, + "\xe5": 556, + "\xe6": 889, + "\xe7": 556, + "\xe8": 556, + "\xe9": 556, + "\xea": 556, + "\xeb": 556, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 611, + "\xf1": 611, + "\xf2": 611, + "\xf3": 611, + "\xf4": 611, + "\xf5": 611, + "\xf6": 611, + "\xf7": 584, + "\xf8": 611, + "\xf9": 611, + "\xfa": 611, + "\xfb": 611, + "\xfc": 611, + "\xfd": 556, + "\xfe": 611, + "\xff": 556, + "\u0100": 722, + "\u0101": 556, + "\u0102": 722, + "\u0103": 556, + "\u0104": 722, + "\u0105": 556, + "\u0106": 722, + "\u0107": 556, + "\u010c": 722, + "\u010d": 556, + "\u010e": 722, + "\u010f": 743, + "\u0110": 722, + "\u0111": 611, + "\u0112": 667, + "\u0113": 556, + "\u0116": 667, + "\u0117": 556, + "\u0118": 667, + "\u0119": 556, + "\u011a": 667, + "\u011b": 556, + "\u011e": 778, + "\u011f": 611, + "\u0122": 778, + "\u0123": 611, + "\u012a": 278, + "\u012b": 278, + "\u012e": 278, + "\u012f": 278, + "\u0130": 278, + "\u0131": 278, + "\u0136": 722, + "\u0137": 556, + "\u0139": 611, + "\u013a": 278, + "\u013b": 611, + "\u013c": 278, + "\u013d": 611, + "\u013e": 400, + "\u0141": 611, + "\u0142": 278, + "\u0143": 722, + "\u0144": 611, + "\u0145": 722, + "\u0146": 611, + "\u0147": 722, + "\u0148": 611, + 
"\u014c": 778, + "\u014d": 611, + "\u0150": 778, + "\u0151": 611, + "\u0152": 1000, + "\u0153": 944, + "\u0154": 722, + "\u0155": 389, + "\u0156": 722, + "\u0157": 389, + "\u0158": 722, + "\u0159": 389, + "\u015a": 667, + "\u015b": 556, + "\u015e": 667, + "\u015f": 556, + "\u0160": 667, + "\u0161": 556, + "\u0162": 611, + "\u0163": 333, + "\u0164": 611, + "\u0165": 389, + "\u016a": 722, + "\u016b": 611, + "\u016e": 722, + "\u016f": 611, + "\u0170": 722, + "\u0171": 611, + "\u0172": 722, + "\u0173": 611, + "\u0178": 667, + "\u0179": 611, + "\u017a": 500, + "\u017b": 611, + "\u017c": 500, + "\u017d": 611, + "\u017e": 500, + "\u0192": 556, + "\u0218": 667, + "\u0219": 556, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 556, + "\u2014": 1000, + "\u2018": 278, + "\u2019": 278, + "\u201a": 278, + "\u201c": 500, + "\u201d": 500, + "\u201e": 500, + "\u2020": 556, + "\u2021": 556, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 494, + "\u2206": 612, + "\u2211": 600, + "\u2212": 584, + "\u221a": 549, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 494, + "\uf6c3": 250, + "\ufb01": 611, + "\ufb02": 611, + }, + ), + "Helvetica-Oblique": ( + { + "FontName": "Helvetica-Oblique", + "Descent": -207.0, + "FontBBox": (-171.0, -225.0, 1116.0, 931.0), + "FontWeight": "Medium", + "CapHeight": 718.0, + "FontFamily": "Helvetica", + "Flags": 0, + "XHeight": 523.0, + "ItalicAngle": -12.0, + "Ascent": 718.0, + }, + { + " ": 278, + "!": 278, + '"': 355, + "#": 556, + "$": 556, + "%": 889, + "&": 667, + "'": 191, + "(": 333, + ")": 333, + "*": 389, + "+": 584, + ",": 278, + "-": 333, + ".": 278, + "/": 278, + "0": 556, + "1": 556, + "2": 556, + "3": 556, + "4": 556, + "5": 556, + "6": 556, + "7": 556, + "8": 556, + "9": 556, + ":": 278, + ";": 278, + "<": 584, + "=": 584, + ">": 584, + "?": 556, + "@": 1015, + "A": 667, + "B": 667, + "C": 722, + "D": 722, + "E": 667, + "F": 611, + "G": 778, + "H": 722, + "I": 278, + "J": 500, + "K": 667, + "L": 556, + "M": 833, + "N": 722, + "O": 778, + "P": 667, + "Q": 778, + "R": 722, + "S": 667, + "T": 611, + "U": 722, + "V": 667, + "W": 944, + "X": 667, + "Y": 667, + "Z": 611, + "[": 278, + "\\": 278, + "]": 278, + "^": 469, + "_": 556, + "`": 333, + "a": 556, + "b": 556, + "c": 500, + "d": 556, + "e": 556, + "f": 278, + "g": 556, + "h": 556, + "i": 222, + "j": 222, + "k": 500, + "l": 222, + "m": 833, + "n": 556, + "o": 556, + "p": 556, + "q": 556, + "r": 333, + "s": 500, + "t": 278, + "u": 556, + "v": 500, + "w": 722, + "x": 500, + "y": 500, + "z": 500, + "{": 334, + "|": 260, + "}": 334, + "~": 584, + "\xa1": 333, + "\xa2": 556, + "\xa3": 556, + "\xa4": 556, + "\xa5": 556, + "\xa6": 260, + "\xa7": 556, + "\xa8": 333, + "\xa9": 737, + "\xaa": 370, + "\xab": 556, + "\xac": 584, + "\xae": 737, + "\xaf": 333, + "\xb0": 400, + "\xb1": 584, + "\xb2": 333, + "\xb3": 333, + "\xb4": 333, + "\xb5": 556, + "\xb6": 537, + "\xb7": 278, + "\xb8": 333, + "\xb9": 333, + "\xba": 365, + "\xbb": 556, + "\xbc": 834, + "\xbd": 834, + "\xbe": 834, + "\xbf": 611, + "\xc0": 667, + "\xc1": 667, + "\xc2": 667, + "\xc3": 667, + "\xc4": 667, + "\xc5": 667, + "\xc6": 1000, + "\xc7": 722, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 278, + "\xcd": 278, + "\xce": 278, + "\xcf": 278, + "\xd0": 722, + "\xd1": 722, + "\xd2": 778, + "\xd3": 778, + "\xd4": 778, + "\xd5": 778, + "\xd6": 778, 
+ "\xd7": 584, + "\xd8": 778, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 667, + "\xde": 667, + "\xdf": 611, + "\xe0": 556, + "\xe1": 556, + "\xe2": 556, + "\xe3": 556, + "\xe4": 556, + "\xe5": 556, + "\xe6": 889, + "\xe7": 500, + "\xe8": 556, + "\xe9": 556, + "\xea": 556, + "\xeb": 556, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 556, + "\xf1": 556, + "\xf2": 556, + "\xf3": 556, + "\xf4": 556, + "\xf5": 556, + "\xf6": 556, + "\xf7": 584, + "\xf8": 611, + "\xf9": 556, + "\xfa": 556, + "\xfb": 556, + "\xfc": 556, + "\xfd": 500, + "\xfe": 556, + "\xff": 500, + "\u0100": 667, + "\u0101": 556, + "\u0102": 667, + "\u0103": 556, + "\u0104": 667, + "\u0105": 556, + "\u0106": 722, + "\u0107": 500, + "\u010c": 722, + "\u010d": 500, + "\u010e": 722, + "\u010f": 643, + "\u0110": 722, + "\u0111": 556, + "\u0112": 667, + "\u0113": 556, + "\u0116": 667, + "\u0117": 556, + "\u0118": 667, + "\u0119": 556, + "\u011a": 667, + "\u011b": 556, + "\u011e": 778, + "\u011f": 556, + "\u0122": 778, + "\u0123": 556, + "\u012a": 278, + "\u012b": 278, + "\u012e": 278, + "\u012f": 222, + "\u0130": 278, + "\u0131": 278, + "\u0136": 667, + "\u0137": 500, + "\u0139": 556, + "\u013a": 222, + "\u013b": 556, + "\u013c": 222, + "\u013d": 556, + "\u013e": 299, + "\u0141": 556, + "\u0142": 222, + "\u0143": 722, + "\u0144": 556, + "\u0145": 722, + "\u0146": 556, + "\u0147": 722, + "\u0148": 556, + "\u014c": 778, + "\u014d": 556, + "\u0150": 778, + "\u0151": 556, + "\u0152": 1000, + "\u0153": 944, + "\u0154": 722, + "\u0155": 333, + "\u0156": 722, + "\u0157": 333, + "\u0158": 722, + "\u0159": 333, + "\u015a": 667, + "\u015b": 500, + "\u015e": 667, + "\u015f": 500, + "\u0160": 667, + "\u0161": 500, + "\u0162": 611, + "\u0163": 278, + "\u0164": 611, + "\u0165": 317, + "\u016a": 722, + "\u016b": 556, + "\u016e": 722, + "\u016f": 556, + "\u0170": 722, + "\u0171": 556, + "\u0172": 722, + "\u0173": 556, + "\u0178": 667, + "\u0179": 611, + "\u017a": 500, + "\u017b": 611, + "\u017c": 500, + "\u017d": 611, + "\u017e": 500, + "\u0192": 556, + "\u0218": 667, + "\u0219": 500, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 556, + "\u2014": 1000, + "\u2018": 222, + "\u2019": 222, + "\u201a": 222, + "\u201c": 333, + "\u201d": 333, + "\u201e": 333, + "\u2020": 556, + "\u2021": 556, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 476, + "\u2206": 612, + "\u2211": 600, + "\u2212": 584, + "\u221a": 453, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 471, + "\uf6c3": 250, + "\ufb01": 500, + "\ufb02": 500, + }, + ), + "Symbol": ( + { + "FontName": "Symbol", + "FontBBox": (-180.0, -293.0, 1090.0, 1010.0), + "FontWeight": "Medium", + "FontFamily": "Symbol", + "Flags": 0, + "ItalicAngle": 0.0, + }, + { + " ": 250, + "!": 333, + "#": 500, + "%": 833, + "&": 778, + "(": 333, + ")": 333, + "+": 549, + ",": 250, + ".": 250, + "/": 278, + "0": 500, + "1": 500, + "2": 500, + "3": 500, + "4": 500, + "5": 500, + "6": 500, + "7": 500, + "8": 500, + "9": 500, + ":": 278, + ";": 278, + "<": 549, + "=": 549, + ">": 549, + "?": 444, + "[": 333, + "]": 333, + "_": 500, + "{": 480, + "|": 200, + "}": 480, + "\xac": 713, + "\xb0": 400, + "\xb1": 549, + "\xb5": 576, + "\xd7": 549, + "\xf7": 549, + "\u0192": 500, + "\u0391": 722, + "\u0392": 667, + "\u0393": 603, + "\u0395": 611, + "\u0396": 611, + "\u0397": 722, + 
"\u0398": 741, + "\u0399": 333, + "\u039a": 722, + "\u039b": 686, + "\u039c": 889, + "\u039d": 722, + "\u039e": 645, + "\u039f": 722, + "\u03a0": 768, + "\u03a1": 556, + "\u03a3": 592, + "\u03a4": 611, + "\u03a5": 690, + "\u03a6": 763, + "\u03a7": 722, + "\u03a8": 795, + "\u03b1": 631, + "\u03b2": 549, + "\u03b3": 411, + "\u03b4": 494, + "\u03b5": 439, + "\u03b6": 494, + "\u03b7": 603, + "\u03b8": 521, + "\u03b9": 329, + "\u03ba": 549, + "\u03bb": 549, + "\u03bd": 521, + "\u03be": 493, + "\u03bf": 549, + "\u03c0": 549, + "\u03c1": 549, + "\u03c2": 439, + "\u03c3": 603, + "\u03c4": 439, + "\u03c5": 576, + "\u03c6": 521, + "\u03c7": 549, + "\u03c8": 686, + "\u03c9": 686, + "\u03d1": 631, + "\u03d2": 620, + "\u03d5": 603, + "\u03d6": 713, + "\u2022": 460, + "\u2026": 1000, + "\u2032": 247, + "\u2033": 411, + "\u2044": 167, + "\u20ac": 750, + "\u2111": 686, + "\u2118": 987, + "\u211c": 795, + "\u2126": 768, + "\u2135": 823, + "\u2190": 987, + "\u2191": 603, + "\u2192": 987, + "\u2193": 603, + "\u2194": 1042, + "\u21b5": 658, + "\u21d0": 987, + "\u21d1": 603, + "\u21d2": 987, + "\u21d3": 603, + "\u21d4": 1042, + "\u2200": 713, + "\u2202": 494, + "\u2203": 549, + "\u2205": 823, + "\u2206": 612, + "\u2207": 713, + "\u2208": 713, + "\u2209": 713, + "\u220b": 439, + "\u220f": 823, + "\u2211": 713, + "\u2212": 549, + "\u2217": 500, + "\u221a": 549, + "\u221d": 713, + "\u221e": 713, + "\u2220": 768, + "\u2227": 603, + "\u2228": 603, + "\u2229": 768, + "\u222a": 768, + "\u222b": 274, + "\u2234": 863, + "\u223c": 549, + "\u2245": 549, + "\u2248": 549, + "\u2260": 549, + "\u2261": 549, + "\u2264": 549, + "\u2265": 549, + "\u2282": 713, + "\u2283": 713, + "\u2284": 713, + "\u2286": 713, + "\u2287": 713, + "\u2295": 768, + "\u2297": 768, + "\u22a5": 658, + "\u22c5": 250, + "\u2320": 686, + "\u2321": 686, + "\u2329": 329, + "\u232a": 329, + "\u25ca": 494, + "\u2660": 753, + "\u2663": 753, + "\u2665": 753, + "\u2666": 753, + "\uf6d9": 790, + "\uf6da": 790, + "\uf6db": 890, + "\uf8e5": 500, + "\uf8e6": 603, + "\uf8e7": 1000, + "\uf8e8": 790, + "\uf8e9": 790, + "\uf8ea": 786, + "\uf8eb": 384, + "\uf8ec": 384, + "\uf8ed": 384, + "\uf8ee": 384, + "\uf8ef": 384, + "\uf8f0": 384, + "\uf8f1": 494, + "\uf8f2": 494, + "\uf8f3": 494, + "\uf8f4": 494, + "\uf8f5": 686, + "\uf8f6": 384, + "\uf8f7": 384, + "\uf8f8": 384, + "\uf8f9": 384, + "\uf8fa": 384, + "\uf8fb": 384, + "\uf8fc": 494, + "\uf8fd": 494, + "\uf8fe": 494, + "\uf8ff": 790, + }, + ), + "Times-Bold": ( + { + "FontName": "Times-Bold", + "Descent": -217.0, + "FontBBox": (-168.0, -218.0, 1000.0, 935.0), + "FontWeight": "Bold", + "CapHeight": 676.0, + "FontFamily": "Times", + "Flags": 0, + "XHeight": 461.0, + "ItalicAngle": 0.0, + "Ascent": 683.0, + }, + { + " ": 250, + "!": 333, + '"': 555, + "#": 500, + "$": 500, + "%": 1000, + "&": 833, + "'": 278, + "(": 333, + ")": 333, + "*": 500, + "+": 570, + ",": 250, + "-": 333, + ".": 250, + "/": 278, + "0": 500, + "1": 500, + "2": 500, + "3": 500, + "4": 500, + "5": 500, + "6": 500, + "7": 500, + "8": 500, + "9": 500, + ":": 333, + ";": 333, + "<": 570, + "=": 570, + ">": 570, + "?": 500, + "@": 930, + "A": 722, + "B": 667, + "C": 722, + "D": 722, + "E": 667, + "F": 611, + "G": 778, + "H": 778, + "I": 389, + "J": 500, + "K": 778, + "L": 667, + "M": 944, + "N": 722, + "O": 778, + "P": 611, + "Q": 778, + "R": 722, + "S": 556, + "T": 667, + "U": 722, + "V": 722, + "W": 1000, + "X": 722, + "Y": 722, + "Z": 667, + "[": 333, + "\\": 278, + "]": 333, + "^": 581, + "_": 500, + "`": 333, + "a": 500, + "b": 556, + "c": 444, + 
"d": 556, + "e": 444, + "f": 333, + "g": 500, + "h": 556, + "i": 278, + "j": 333, + "k": 556, + "l": 278, + "m": 833, + "n": 556, + "o": 500, + "p": 556, + "q": 556, + "r": 444, + "s": 389, + "t": 333, + "u": 556, + "v": 500, + "w": 722, + "x": 500, + "y": 500, + "z": 444, + "{": 394, + "|": 220, + "}": 394, + "~": 520, + "\xa1": 333, + "\xa2": 500, + "\xa3": 500, + "\xa4": 500, + "\xa5": 500, + "\xa6": 220, + "\xa7": 500, + "\xa8": 333, + "\xa9": 747, + "\xaa": 300, + "\xab": 500, + "\xac": 570, + "\xae": 747, + "\xaf": 333, + "\xb0": 400, + "\xb1": 570, + "\xb2": 300, + "\xb3": 300, + "\xb4": 333, + "\xb5": 556, + "\xb6": 540, + "\xb7": 250, + "\xb8": 333, + "\xb9": 300, + "\xba": 330, + "\xbb": 500, + "\xbc": 750, + "\xbd": 750, + "\xbe": 750, + "\xbf": 500, + "\xc0": 722, + "\xc1": 722, + "\xc2": 722, + "\xc3": 722, + "\xc4": 722, + "\xc5": 722, + "\xc6": 1000, + "\xc7": 722, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 389, + "\xcd": 389, + "\xce": 389, + "\xcf": 389, + "\xd0": 722, + "\xd1": 722, + "\xd2": 778, + "\xd3": 778, + "\xd4": 778, + "\xd5": 778, + "\xd6": 778, + "\xd7": 570, + "\xd8": 778, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 722, + "\xde": 611, + "\xdf": 556, + "\xe0": 500, + "\xe1": 500, + "\xe2": 500, + "\xe3": 500, + "\xe4": 500, + "\xe5": 500, + "\xe6": 722, + "\xe7": 444, + "\xe8": 444, + "\xe9": 444, + "\xea": 444, + "\xeb": 444, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 500, + "\xf1": 556, + "\xf2": 500, + "\xf3": 500, + "\xf4": 500, + "\xf5": 500, + "\xf6": 500, + "\xf7": 570, + "\xf8": 500, + "\xf9": 556, + "\xfa": 556, + "\xfb": 556, + "\xfc": 556, + "\xfd": 500, + "\xfe": 556, + "\xff": 500, + "\u0100": 722, + "\u0101": 500, + "\u0102": 722, + "\u0103": 500, + "\u0104": 722, + "\u0105": 500, + "\u0106": 722, + "\u0107": 444, + "\u010c": 722, + "\u010d": 444, + "\u010e": 722, + "\u010f": 672, + "\u0110": 722, + "\u0111": 556, + "\u0112": 667, + "\u0113": 444, + "\u0116": 667, + "\u0117": 444, + "\u0118": 667, + "\u0119": 444, + "\u011a": 667, + "\u011b": 444, + "\u011e": 778, + "\u011f": 500, + "\u0122": 778, + "\u0123": 500, + "\u012a": 389, + "\u012b": 278, + "\u012e": 389, + "\u012f": 278, + "\u0130": 389, + "\u0131": 278, + "\u0136": 778, + "\u0137": 556, + "\u0139": 667, + "\u013a": 278, + "\u013b": 667, + "\u013c": 278, + "\u013d": 667, + "\u013e": 394, + "\u0141": 667, + "\u0142": 278, + "\u0143": 722, + "\u0144": 556, + "\u0145": 722, + "\u0146": 556, + "\u0147": 722, + "\u0148": 556, + "\u014c": 778, + "\u014d": 500, + "\u0150": 778, + "\u0151": 500, + "\u0152": 1000, + "\u0153": 722, + "\u0154": 722, + "\u0155": 444, + "\u0156": 722, + "\u0157": 444, + "\u0158": 722, + "\u0159": 444, + "\u015a": 556, + "\u015b": 389, + "\u015e": 556, + "\u015f": 389, + "\u0160": 556, + "\u0161": 389, + "\u0162": 667, + "\u0163": 333, + "\u0164": 667, + "\u0165": 416, + "\u016a": 722, + "\u016b": 556, + "\u016e": 722, + "\u016f": 556, + "\u0170": 722, + "\u0171": 556, + "\u0172": 722, + "\u0173": 556, + "\u0178": 722, + "\u0179": 667, + "\u017a": 444, + "\u017b": 667, + "\u017c": 444, + "\u017d": 667, + "\u017e": 444, + "\u0192": 500, + "\u0218": 556, + "\u0219": 389, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 500, + "\u2014": 1000, + "\u2018": 333, + "\u2019": 333, + "\u201a": 333, + "\u201c": 500, + "\u201d": 500, + "\u201e": 500, + "\u2020": 500, + "\u2021": 500, + "\u2022": 350, 
+ "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 494, + "\u2206": 612, + "\u2211": 600, + "\u2212": 570, + "\u221a": 549, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 494, + "\uf6c3": 250, + "\ufb01": 556, + "\ufb02": 556, + }, + ), + "Times-BoldItalic": ( + { + "FontName": "Times-BoldItalic", + "Descent": -217.0, + "FontBBox": (-200.0, -218.0, 996.0, 921.0), + "FontWeight": "Bold", + "CapHeight": 669.0, + "FontFamily": "Times", + "Flags": 0, + "XHeight": 462.0, + "ItalicAngle": -15.0, + "Ascent": 683.0, + }, + { + " ": 250, + "!": 389, + '"': 555, + "#": 500, + "$": 500, + "%": 833, + "&": 778, + "'": 278, + "(": 333, + ")": 333, + "*": 500, + "+": 570, + ",": 250, + "-": 333, + ".": 250, + "/": 278, + "0": 500, + "1": 500, + "2": 500, + "3": 500, + "4": 500, + "5": 500, + "6": 500, + "7": 500, + "8": 500, + "9": 500, + ":": 333, + ";": 333, + "<": 570, + "=": 570, + ">": 570, + "?": 500, + "@": 832, + "A": 667, + "B": 667, + "C": 667, + "D": 722, + "E": 667, + "F": 667, + "G": 722, + "H": 778, + "I": 389, + "J": 500, + "K": 667, + "L": 611, + "M": 889, + "N": 722, + "O": 722, + "P": 611, + "Q": 722, + "R": 667, + "S": 556, + "T": 611, + "U": 722, + "V": 667, + "W": 889, + "X": 667, + "Y": 611, + "Z": 611, + "[": 333, + "\\": 278, + "]": 333, + "^": 570, + "_": 500, + "`": 333, + "a": 500, + "b": 500, + "c": 444, + "d": 500, + "e": 444, + "f": 333, + "g": 500, + "h": 556, + "i": 278, + "j": 278, + "k": 500, + "l": 278, + "m": 778, + "n": 556, + "o": 500, + "p": 500, + "q": 500, + "r": 389, + "s": 389, + "t": 278, + "u": 556, + "v": 444, + "w": 667, + "x": 500, + "y": 444, + "z": 389, + "{": 348, + "|": 220, + "}": 348, + "~": 570, + "\xa1": 389, + "\xa2": 500, + "\xa3": 500, + "\xa4": 500, + "\xa5": 500, + "\xa6": 220, + "\xa7": 500, + "\xa8": 333, + "\xa9": 747, + "\xaa": 266, + "\xab": 500, + "\xac": 606, + "\xae": 747, + "\xaf": 333, + "\xb0": 400, + "\xb1": 570, + "\xb2": 300, + "\xb3": 300, + "\xb4": 333, + "\xb5": 576, + "\xb6": 500, + "\xb7": 250, + "\xb8": 333, + "\xb9": 300, + "\xba": 300, + "\xbb": 500, + "\xbc": 750, + "\xbd": 750, + "\xbe": 750, + "\xbf": 500, + "\xc0": 667, + "\xc1": 667, + "\xc2": 667, + "\xc3": 667, + "\xc4": 667, + "\xc5": 667, + "\xc6": 944, + "\xc7": 667, + "\xc8": 667, + "\xc9": 667, + "\xca": 667, + "\xcb": 667, + "\xcc": 389, + "\xcd": 389, + "\xce": 389, + "\xcf": 389, + "\xd0": 722, + "\xd1": 722, + "\xd2": 722, + "\xd3": 722, + "\xd4": 722, + "\xd5": 722, + "\xd6": 722, + "\xd7": 570, + "\xd8": 722, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 611, + "\xde": 611, + "\xdf": 500, + "\xe0": 500, + "\xe1": 500, + "\xe2": 500, + "\xe3": 500, + "\xe4": 500, + "\xe5": 500, + "\xe6": 722, + "\xe7": 444, + "\xe8": 444, + "\xe9": 444, + "\xea": 444, + "\xeb": 444, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 500, + "\xf1": 556, + "\xf2": 500, + "\xf3": 500, + "\xf4": 500, + "\xf5": 500, + "\xf6": 500, + "\xf7": 570, + "\xf8": 500, + "\xf9": 556, + "\xfa": 556, + "\xfb": 556, + "\xfc": 556, + "\xfd": 444, + "\xfe": 500, + "\xff": 444, + "\u0100": 667, + "\u0101": 500, + "\u0102": 667, + "\u0103": 500, + "\u0104": 667, + "\u0105": 500, + "\u0106": 667, + "\u0107": 444, + "\u010c": 667, + "\u010d": 444, + "\u010e": 722, + "\u010f": 608, + "\u0110": 722, + "\u0111": 500, + "\u0112": 667, + "\u0113": 444, + "\u0116": 667, + "\u0117": 444, + "\u0118": 667, + "\u0119": 444, + "\u011a": 667, + "\u011b": 444, + "\u011e": 722, + 
"\u011f": 500, + "\u0122": 722, + "\u0123": 500, + "\u012a": 389, + "\u012b": 278, + "\u012e": 389, + "\u012f": 278, + "\u0130": 389, + "\u0131": 278, + "\u0136": 667, + "\u0137": 500, + "\u0139": 611, + "\u013a": 278, + "\u013b": 611, + "\u013c": 278, + "\u013d": 611, + "\u013e": 382, + "\u0141": 611, + "\u0142": 278, + "\u0143": 722, + "\u0144": 556, + "\u0145": 722, + "\u0146": 556, + "\u0147": 722, + "\u0148": 556, + "\u014c": 722, + "\u014d": 500, + "\u0150": 722, + "\u0151": 500, + "\u0152": 944, + "\u0153": 722, + "\u0154": 667, + "\u0155": 389, + "\u0156": 667, + "\u0157": 389, + "\u0158": 667, + "\u0159": 389, + "\u015a": 556, + "\u015b": 389, + "\u015e": 556, + "\u015f": 389, + "\u0160": 556, + "\u0161": 389, + "\u0162": 611, + "\u0163": 278, + "\u0164": 611, + "\u0165": 366, + "\u016a": 722, + "\u016b": 556, + "\u016e": 722, + "\u016f": 556, + "\u0170": 722, + "\u0171": 556, + "\u0172": 722, + "\u0173": 556, + "\u0178": 611, + "\u0179": 611, + "\u017a": 389, + "\u017b": 611, + "\u017c": 389, + "\u017d": 611, + "\u017e": 389, + "\u0192": 500, + "\u0218": 556, + "\u0219": 389, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 500, + "\u2014": 1000, + "\u2018": 333, + "\u2019": 333, + "\u201a": 333, + "\u201c": 500, + "\u201d": 500, + "\u201e": 500, + "\u2020": 500, + "\u2021": 500, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 1000, + "\u2202": 494, + "\u2206": 612, + "\u2211": 600, + "\u2212": 606, + "\u221a": 549, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 494, + "\uf6c3": 250, + "\ufb01": 556, + "\ufb02": 556, + }, + ), + "Times-Italic": ( + { + "FontName": "Times-Italic", + "Descent": -217.0, + "FontBBox": (-169.0, -217.0, 1010.0, 883.0), + "FontWeight": "Medium", + "CapHeight": 653.0, + "FontFamily": "Times", + "Flags": 0, + "XHeight": 441.0, + "ItalicAngle": -15.5, + "Ascent": 683.0, + }, + { + " ": 250, + "!": 333, + '"': 420, + "#": 500, + "$": 500, + "%": 833, + "&": 778, + "'": 214, + "(": 333, + ")": 333, + "*": 500, + "+": 675, + ",": 250, + "-": 333, + ".": 250, + "/": 278, + "0": 500, + "1": 500, + "2": 500, + "3": 500, + "4": 500, + "5": 500, + "6": 500, + "7": 500, + "8": 500, + "9": 500, + ":": 333, + ";": 333, + "<": 675, + "=": 675, + ">": 675, + "?": 500, + "@": 920, + "A": 611, + "B": 611, + "C": 667, + "D": 722, + "E": 611, + "F": 611, + "G": 722, + "H": 722, + "I": 333, + "J": 444, + "K": 667, + "L": 556, + "M": 833, + "N": 667, + "O": 722, + "P": 611, + "Q": 722, + "R": 611, + "S": 500, + "T": 556, + "U": 722, + "V": 611, + "W": 833, + "X": 611, + "Y": 556, + "Z": 556, + "[": 389, + "\\": 278, + "]": 389, + "^": 422, + "_": 500, + "`": 333, + "a": 500, + "b": 500, + "c": 444, + "d": 500, + "e": 444, + "f": 278, + "g": 500, + "h": 500, + "i": 278, + "j": 278, + "k": 444, + "l": 278, + "m": 722, + "n": 500, + "o": 500, + "p": 500, + "q": 500, + "r": 389, + "s": 389, + "t": 278, + "u": 500, + "v": 444, + "w": 667, + "x": 444, + "y": 444, + "z": 389, + "{": 400, + "|": 275, + "}": 400, + "~": 541, + "\xa1": 389, + "\xa2": 500, + "\xa3": 500, + "\xa4": 500, + "\xa5": 500, + "\xa6": 275, + "\xa7": 500, + "\xa8": 333, + "\xa9": 760, + "\xaa": 276, + "\xab": 500, + "\xac": 675, + "\xae": 760, + "\xaf": 333, + "\xb0": 400, + "\xb1": 675, + "\xb2": 300, + "\xb3": 300, + "\xb4": 333, + "\xb5": 500, + "\xb6": 523, + "\xb7": 250, + "\xb8": 333, + "\xb9": 300, + "\xba": 310, + "\xbb": 
500, + "\xbc": 750, + "\xbd": 750, + "\xbe": 750, + "\xbf": 500, + "\xc0": 611, + "\xc1": 611, + "\xc2": 611, + "\xc3": 611, + "\xc4": 611, + "\xc5": 611, + "\xc6": 889, + "\xc7": 667, + "\xc8": 611, + "\xc9": 611, + "\xca": 611, + "\xcb": 611, + "\xcc": 333, + "\xcd": 333, + "\xce": 333, + "\xcf": 333, + "\xd0": 722, + "\xd1": 667, + "\xd2": 722, + "\xd3": 722, + "\xd4": 722, + "\xd5": 722, + "\xd6": 722, + "\xd7": 675, + "\xd8": 722, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 556, + "\xde": 611, + "\xdf": 500, + "\xe0": 500, + "\xe1": 500, + "\xe2": 500, + "\xe3": 500, + "\xe4": 500, + "\xe5": 500, + "\xe6": 667, + "\xe7": 444, + "\xe8": 444, + "\xe9": 444, + "\xea": 444, + "\xeb": 444, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 500, + "\xf1": 500, + "\xf2": 500, + "\xf3": 500, + "\xf4": 500, + "\xf5": 500, + "\xf6": 500, + "\xf7": 675, + "\xf8": 500, + "\xf9": 500, + "\xfa": 500, + "\xfb": 500, + "\xfc": 500, + "\xfd": 444, + "\xfe": 500, + "\xff": 444, + "\u0100": 611, + "\u0101": 500, + "\u0102": 611, + "\u0103": 500, + "\u0104": 611, + "\u0105": 500, + "\u0106": 667, + "\u0107": 444, + "\u010c": 667, + "\u010d": 444, + "\u010e": 722, + "\u010f": 544, + "\u0110": 722, + "\u0111": 500, + "\u0112": 611, + "\u0113": 444, + "\u0116": 611, + "\u0117": 444, + "\u0118": 611, + "\u0119": 444, + "\u011a": 611, + "\u011b": 444, + "\u011e": 722, + "\u011f": 500, + "\u0122": 722, + "\u0123": 500, + "\u012a": 333, + "\u012b": 278, + "\u012e": 333, + "\u012f": 278, + "\u0130": 333, + "\u0131": 278, + "\u0136": 667, + "\u0137": 444, + "\u0139": 556, + "\u013a": 278, + "\u013b": 556, + "\u013c": 278, + "\u013d": 611, + "\u013e": 300, + "\u0141": 556, + "\u0142": 278, + "\u0143": 667, + "\u0144": 500, + "\u0145": 667, + "\u0146": 500, + "\u0147": 667, + "\u0148": 500, + "\u014c": 722, + "\u014d": 500, + "\u0150": 722, + "\u0151": 500, + "\u0152": 944, + "\u0153": 667, + "\u0154": 611, + "\u0155": 389, + "\u0156": 611, + "\u0157": 389, + "\u0158": 611, + "\u0159": 389, + "\u015a": 500, + "\u015b": 389, + "\u015e": 500, + "\u015f": 389, + "\u0160": 500, + "\u0161": 389, + "\u0162": 556, + "\u0163": 278, + "\u0164": 556, + "\u0165": 300, + "\u016a": 722, + "\u016b": 500, + "\u016e": 722, + "\u016f": 500, + "\u0170": 722, + "\u0171": 500, + "\u0172": 722, + "\u0173": 500, + "\u0178": 556, + "\u0179": 556, + "\u017a": 389, + "\u017b": 556, + "\u017c": 389, + "\u017d": 556, + "\u017e": 389, + "\u0192": 500, + "\u0218": 500, + "\u0219": 389, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 500, + "\u2014": 889, + "\u2018": 333, + "\u2019": 333, + "\u201a": 333, + "\u201c": 556, + "\u201d": 556, + "\u201e": 556, + "\u2020": 500, + "\u2021": 500, + "\u2022": 350, + "\u2026": 889, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 980, + "\u2202": 476, + "\u2206": 612, + "\u2211": 600, + "\u2212": 675, + "\u221a": 453, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 471, + "\uf6c3": 250, + "\ufb01": 500, + "\ufb02": 500, + }, + ), + "Times-Roman": ( + { + "FontName": "Times-Roman", + "Descent": -217.0, + "FontBBox": (-168.0, -218.0, 1000.0, 898.0), + "FontWeight": "Roman", + "CapHeight": 662.0, + "FontFamily": "Times", + "Flags": 0, + "XHeight": 450.0, + "ItalicAngle": 0.0, + "Ascent": 683.0, + }, + { + " ": 250, + "!": 333, + '"': 408, + "#": 500, + "$": 500, + "%": 833, + "&": 778, + "'": 180, + "(": 333, + ")": 
333, + "*": 500, + "+": 564, + ",": 250, + "-": 333, + ".": 250, + "/": 278, + "0": 500, + "1": 500, + "2": 500, + "3": 500, + "4": 500, + "5": 500, + "6": 500, + "7": 500, + "8": 500, + "9": 500, + ":": 278, + ";": 278, + "<": 564, + "=": 564, + ">": 564, + "?": 444, + "@": 921, + "A": 722, + "B": 667, + "C": 667, + "D": 722, + "E": 611, + "F": 556, + "G": 722, + "H": 722, + "I": 333, + "J": 389, + "K": 722, + "L": 611, + "M": 889, + "N": 722, + "O": 722, + "P": 556, + "Q": 722, + "R": 667, + "S": 556, + "T": 611, + "U": 722, + "V": 722, + "W": 944, + "X": 722, + "Y": 722, + "Z": 611, + "[": 333, + "\\": 278, + "]": 333, + "^": 469, + "_": 500, + "`": 333, + "a": 444, + "b": 500, + "c": 444, + "d": 500, + "e": 444, + "f": 333, + "g": 500, + "h": 500, + "i": 278, + "j": 278, + "k": 500, + "l": 278, + "m": 778, + "n": 500, + "o": 500, + "p": 500, + "q": 500, + "r": 333, + "s": 389, + "t": 278, + "u": 500, + "v": 500, + "w": 722, + "x": 500, + "y": 500, + "z": 444, + "{": 480, + "|": 200, + "}": 480, + "~": 541, + "\xa1": 333, + "\xa2": 500, + "\xa3": 500, + "\xa4": 500, + "\xa5": 500, + "\xa6": 200, + "\xa7": 500, + "\xa8": 333, + "\xa9": 760, + "\xaa": 276, + "\xab": 500, + "\xac": 564, + "\xae": 760, + "\xaf": 333, + "\xb0": 400, + "\xb1": 564, + "\xb2": 300, + "\xb3": 300, + "\xb4": 333, + "\xb5": 500, + "\xb6": 453, + "\xb7": 250, + "\xb8": 333, + "\xb9": 300, + "\xba": 310, + "\xbb": 500, + "\xbc": 750, + "\xbd": 750, + "\xbe": 750, + "\xbf": 444, + "\xc0": 722, + "\xc1": 722, + "\xc2": 722, + "\xc3": 722, + "\xc4": 722, + "\xc5": 722, + "\xc6": 889, + "\xc7": 667, + "\xc8": 611, + "\xc9": 611, + "\xca": 611, + "\xcb": 611, + "\xcc": 333, + "\xcd": 333, + "\xce": 333, + "\xcf": 333, + "\xd0": 722, + "\xd1": 722, + "\xd2": 722, + "\xd3": 722, + "\xd4": 722, + "\xd5": 722, + "\xd6": 722, + "\xd7": 564, + "\xd8": 722, + "\xd9": 722, + "\xda": 722, + "\xdb": 722, + "\xdc": 722, + "\xdd": 722, + "\xde": 556, + "\xdf": 500, + "\xe0": 444, + "\xe1": 444, + "\xe2": 444, + "\xe3": 444, + "\xe4": 444, + "\xe5": 444, + "\xe6": 667, + "\xe7": 444, + "\xe8": 444, + "\xe9": 444, + "\xea": 444, + "\xeb": 444, + "\xec": 278, + "\xed": 278, + "\xee": 278, + "\xef": 278, + "\xf0": 500, + "\xf1": 500, + "\xf2": 500, + "\xf3": 500, + "\xf4": 500, + "\xf5": 500, + "\xf6": 500, + "\xf7": 564, + "\xf8": 500, + "\xf9": 500, + "\xfa": 500, + "\xfb": 500, + "\xfc": 500, + "\xfd": 500, + "\xfe": 500, + "\xff": 500, + "\u0100": 722, + "\u0101": 444, + "\u0102": 722, + "\u0103": 444, + "\u0104": 722, + "\u0105": 444, + "\u0106": 667, + "\u0107": 444, + "\u010c": 667, + "\u010d": 444, + "\u010e": 722, + "\u010f": 588, + "\u0110": 722, + "\u0111": 500, + "\u0112": 611, + "\u0113": 444, + "\u0116": 611, + "\u0117": 444, + "\u0118": 611, + "\u0119": 444, + "\u011a": 611, + "\u011b": 444, + "\u011e": 722, + "\u011f": 500, + "\u0122": 722, + "\u0123": 500, + "\u012a": 333, + "\u012b": 278, + "\u012e": 333, + "\u012f": 278, + "\u0130": 333, + "\u0131": 278, + "\u0136": 722, + "\u0137": 500, + "\u0139": 611, + "\u013a": 278, + "\u013b": 611, + "\u013c": 278, + "\u013d": 611, + "\u013e": 344, + "\u0141": 611, + "\u0142": 278, + "\u0143": 722, + "\u0144": 500, + "\u0145": 722, + "\u0146": 500, + "\u0147": 722, + "\u0148": 500, + "\u014c": 722, + "\u014d": 500, + "\u0150": 722, + "\u0151": 500, + "\u0152": 889, + "\u0153": 722, + "\u0154": 667, + "\u0155": 333, + "\u0156": 667, + "\u0157": 333, + "\u0158": 667, + "\u0159": 333, + "\u015a": 556, + "\u015b": 389, + "\u015e": 556, + "\u015f": 389, + "\u0160": 556, + "\u0161": 
389, + "\u0162": 611, + "\u0163": 278, + "\u0164": 611, + "\u0165": 326, + "\u016a": 722, + "\u016b": 500, + "\u016e": 722, + "\u016f": 500, + "\u0170": 722, + "\u0171": 500, + "\u0172": 722, + "\u0173": 500, + "\u0178": 722, + "\u0179": 611, + "\u017a": 444, + "\u017b": 611, + "\u017c": 444, + "\u017d": 611, + "\u017e": 444, + "\u0192": 500, + "\u0218": 556, + "\u0219": 389, + "\u02c6": 333, + "\u02c7": 333, + "\u02d8": 333, + "\u02d9": 333, + "\u02da": 333, + "\u02db": 333, + "\u02dc": 333, + "\u02dd": 333, + "\u2013": 500, + "\u2014": 1000, + "\u2018": 333, + "\u2019": 333, + "\u201a": 333, + "\u201c": 444, + "\u201d": 444, + "\u201e": 444, + "\u2020": 500, + "\u2021": 500, + "\u2022": 350, + "\u2026": 1000, + "\u2030": 1000, + "\u2039": 333, + "\u203a": 333, + "\u2044": 167, + "\u2122": 980, + "\u2202": 476, + "\u2206": 612, + "\u2211": 600, + "\u2212": 564, + "\u221a": 453, + "\u2260": 549, + "\u2264": 549, + "\u2265": 549, + "\u25ca": 471, + "\uf6c3": 250, + "\ufb01": 556, + "\ufb02": 556, + }, + ), + "ZapfDingbats": ( + { + "FontName": "ZapfDingbats", + "FontBBox": (-1.0, -143.0, 981.0, 820.0), + "FontWeight": "Medium", + "FontFamily": "ITC", + "Flags": 0, + "ItalicAngle": 0.0, + }, + { + "\x01": 974, + "\x02": 961, + "\x03": 980, + "\x04": 719, + "\x05": 789, + "\x06": 494, + "\x07": 552, + "\x08": 537, + "\t": 577, + "\n": 692, + "\x0b": 960, + "\x0c": 939, + "\r": 549, + "\x0e": 855, + "\x0f": 911, + "\x10": 933, + "\x11": 945, + "\x12": 974, + "\x13": 755, + "\x14": 846, + "\x15": 762, + "\x16": 761, + "\x17": 571, + "\x18": 677, + "\x19": 763, + "\x1a": 760, + "\x1b": 759, + "\x1c": 754, + "\x1d": 786, + "\x1e": 788, + "\x1f": 788, + " ": 790, + "!": 793, + '"': 794, + "#": 816, + "$": 823, + "%": 789, + "&": 841, + "'": 823, + "(": 833, + ")": 816, + "*": 831, + "+": 923, + ",": 744, + "-": 723, + ".": 749, + "/": 790, + "0": 792, + "1": 695, + "2": 776, + "3": 768, + "4": 792, + "5": 759, + "6": 707, + "7": 708, + "8": 682, + "9": 701, + ":": 826, + ";": 815, + "<": 789, + "=": 789, + ">": 707, + "?": 687, + "@": 696, + "A": 689, + "B": 786, + "C": 787, + "D": 713, + "E": 791, + "F": 785, + "G": 791, + "H": 873, + "I": 761, + "J": 762, + "K": 759, + "L": 892, + "M": 892, + "N": 788, + "O": 784, + "Q": 438, + "R": 138, + "S": 277, + "T": 415, + "U": 509, + "V": 410, + "W": 234, + "X": 234, + "Y": 390, + "Z": 390, + "[": 276, + "\\": 276, + "]": 317, + "^": 317, + "_": 334, + "`": 334, + "a": 392, + "b": 392, + "c": 668, + "d": 668, + "e": 732, + "f": 544, + "g": 544, + "h": 910, + "i": 911, + "j": 667, + "k": 760, + "l": 760, + "m": 626, + "n": 694, + "o": 595, + "p": 776, + "u": 690, + "v": 791, + "w": 790, + "x": 788, + "y": 788, + "z": 788, + "{": 788, + "|": 788, + "}": 788, + "~": 788, + "\x7f": 788, + "\x80": 788, + "\x81": 788, + "\x82": 788, + "\x83": 788, + "\x84": 788, + "\x85": 788, + "\x86": 788, + "\x87": 788, + "\x88": 788, + "\x89": 788, + "\x8a": 788, + "\x8b": 788, + "\x8c": 788, + "\x8d": 788, + "\x8e": 788, + "\x8f": 788, + "\x90": 788, + "\x91": 788, + "\x92": 788, + "\x93": 788, + "\x94": 788, + "\x95": 788, + "\x96": 788, + "\x97": 788, + "\x98": 788, + "\x99": 788, + "\x9a": 788, + "\x9b": 788, + "\x9c": 788, + "\x9d": 788, + "\x9e": 788, + "\x9f": 788, + "\xa0": 894, + "\xa1": 838, + "\xa2": 924, + "\xa3": 1016, + "\xa4": 458, + "\xa5": 924, + "\xa6": 918, + "\xa7": 927, + "\xa8": 928, + "\xa9": 928, + "\xaa": 834, + "\xab": 873, + "\xac": 828, + "\xad": 924, + "\xae": 917, + "\xaf": 930, + "\xb0": 931, + "\xb1": 463, + "\xb2": 883, + "\xb3": 836, + 
"\xb4": 867, + "\xb5": 696, + "\xb6": 874, + "\xb7": 760, + "\xb8": 946, + "\xb9": 865, + "\xba": 967, + "\xbb": 831, + "\xbc": 873, + "\xbd": 927, + "\xbe": 970, + "\xbf": 918, + "\xc0": 748, + "\xc1": 836, + "\xc2": 771, + "\xc3": 888, + "\xc4": 748, + "\xc5": 771, + "\xc6": 888, + "\xc7": 867, + "\xc8": 696, + "\xc9": 874, + "\xca": 974, + "\xcb": 762, + "\xcc": 759, + "\xcd": 509, + "\xce": 410, + }, + ), +} + +# Aliases defined in implementation note 62 in Appecix H. related to section 5.5.1 +# (Type 1 Fonts) in the PDF Reference. +FONT_METRICS["Arial"] = FONT_METRICS["Helvetica"] +FONT_METRICS["Arial,Italic"] = FONT_METRICS["Helvetica-Oblique"] +FONT_METRICS["Arial,Bold"] = FONT_METRICS["Helvetica-Bold"] +FONT_METRICS["Arial,BoldItalic"] = FONT_METRICS["Helvetica-BoldOblique"] +FONT_METRICS["CourierNew"] = FONT_METRICS["Courier"] +FONT_METRICS["CourierNew,Italic"] = FONT_METRICS["Courier-Oblique"] +FONT_METRICS["CourierNew,Bold"] = FONT_METRICS["Courier-Bold"] +FONT_METRICS["CourierNew,BoldItalic"] = FONT_METRICS["Courier-BoldOblique"] +FONT_METRICS["TimesNewRoman"] = FONT_METRICS["Times-Roman"] +FONT_METRICS["TimesNewRoman,Italic"] = FONT_METRICS["Times-Italic"] +FONT_METRICS["TimesNewRoman,Bold"] = FONT_METRICS["Times-Bold"] +FONT_METRICS["TimesNewRoman,BoldItalic"] = FONT_METRICS["Times-BoldItalic"] diff --git a/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py b/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py new file mode 100644 index 00000000..9e5135f3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py @@ -0,0 +1,4363 @@ +""" Mappings from Adobe glyph names to Unicode characters. + +In some CMap tables, Adobe glyph names are used for specifying +Unicode characters instead of using decimal/hex character code. + +The following data was taken by + + $ wget https://partners.adobe.com/public/developer/en/opentype/glyphlist.txt + $ python tools/conv_glyphlist.py glyphlist.txt > glyphlist.py + +""" + +# ################################################################################### +# Copyright (c) 1997,1998,2002,2007 Adobe Systems Incorporated +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this documentation file to use, copy, publish, distribute, +# sublicense, and/or sell copies of the documentation, and to permit +# others to do the same, provided that: +# - No modification, editing or other alteration of this document is +# allowed; and +# - The above copyright notice and this permission notice shall be +# included in all copies of the documentation. +# +# Permission is hereby granted, free of charge, to any person obtaining a +# copy of this documentation file, to create their own derivative works +# from the content of this document to use, copy, publish, distribute, +# sublicense, and/or sell the derivative works, and to permit others to do +# the same, provided that the derived work is not represented as being a +# copy or version of this document. +# +# Adobe shall not be liable to any party for any loss of revenue or profit +# or for indirect, incidental, special, consequential, or other similar +# damages, whether based on tort (including without limitation negligence +# or strict liability), contract or other legal or equitable grounds even +# if Adobe has been advised or had reason to know of the possibility of +# such damages. The Adobe materials are provided on an "AS IS" basis. 
diff --git a/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py b/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py
new file mode 100644
index 00000000..9e5135f3
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/glyphlist.py
@@ -0,0 +1,4363 @@
+""" Mappings from Adobe glyph names to Unicode characters.
+
+In some CMap tables, Adobe glyph names are used for specifying
+Unicode characters instead of using decimal/hex character code.
+
+The following data was generated by:
+
+  $ wget https://partners.adobe.com/public/developer/en/opentype/glyphlist.txt
+  $ python tools/conv_glyphlist.py glyphlist.txt > glyphlist.py
+
+"""
+
+# ###################################################################################
+# Copyright (c) 1997,1998,2002,2007 Adobe Systems Incorporated
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this documentation file to use, copy, publish, distribute,
+# sublicense, and/or sell copies of the documentation, and to permit
+# others to do the same, provided that:
+# - No modification, editing or other alteration of this document is
+# allowed; and
+# - The above copyright notice and this permission notice shall be
+# included in all copies of the documentation.
+#
+# Permission is hereby granted, free of charge, to any person obtaining a
+# copy of this documentation file, to create their own derivative works
+# from the content of this document to use, copy, publish, distribute,
+# sublicense, and/or sell the derivative works, and to permit others to do
+# the same, provided that the derived work is not represented as being a
+# copy or version of this document.
+#
+# Adobe shall not be liable to any party for any loss of revenue or profit
+# or for indirect, incidental, special, consequential, or other similar
+# damages, whether based on tort (including without limitation negligence
+# or strict liability), contract or other legal or equitable grounds even
+# if Adobe has been advised or had reason to know of the possibility of
+# such damages. The Adobe materials are provided on an "AS IS" basis.
+# Adobe specifically disclaims all express, statutory, or implied
+# warranties relating to the Adobe materials, including but not limited to
+# those concerning merchantability or fitness for a particular purpose or
+# non-infringement of any third party rights regarding the Adobe
+# materials.
+# ###################################################################################
+# Name: Adobe Glyph List
+# Table version: 2.0
+# Date: September 20, 2002
+#
+# See http://partners.adobe.com/asn/developer/typeforum/unicodegn.html
+#
+# Format: Semicolon-delimited fields:
+# (1) glyph name
+# (2) Unicode scalar value
+
+
+def convert_glyphlist(path: str) -> None:
+    """Convert a glyph list into a python representation.
+
+    See output below.
+    """
+    state = 0
+    with open(path) as fileinput:
+        for line in fileinput.readlines():
+            line = line.strip()
+            if not line or line.startswith("#"):
+                if state == 1:
+                    state = 2
+                    print("}\n")
+                print(line)
+                continue
+            if state == 0:
+                print("\nglyphname2unicode = {")
+                state = 1
+            (name, x) = line.split(";")
+            codes = x.split(" ")
+            print(
+                " {!r}: u'{}',".format(name, "".join("\\u%s" % code for code in codes))
+            )
+
+
+glyphname2unicode = {
+ "A": "\u0041", + "AE": "\u00C6", + "AEacute": "\u01FC", + "AEmacron": "\u01E2", + "AEsmall": "\uF7E6", + "Aacute": "\u00C1", + "Aacutesmall": "\uF7E1", + "Abreve": "\u0102", + "Abreveacute": "\u1EAE", + "Abrevecyrillic": "\u04D0", + "Abrevedotbelow": "\u1EB6", + "Abrevegrave": "\u1EB0", + "Abrevehookabove": "\u1EB2", + "Abrevetilde": "\u1EB4", + "Acaron": "\u01CD", + "Acircle": "\u24B6", + "Acircumflex": "\u00C2", + "Acircumflexacute": "\u1EA4", + "Acircumflexdotbelow": "\u1EAC", + "Acircumflexgrave": "\u1EA6", + "Acircumflexhookabove": "\u1EA8", + "Acircumflexsmall": "\uF7E2", + "Acircumflextilde": "\u1EAA", + "Acute": "\uF6C9", + "Acutesmall": "\uF7B4", + "Acyrillic": "\u0410", + "Adblgrave": "\u0200", + "Adieresis": "\u00C4", + "Adieresiscyrillic": "\u04D2", + "Adieresismacron": "\u01DE", + "Adieresissmall": "\uF7E4", + "Adotbelow": "\u1EA0", + "Adotmacron": "\u01E0", + "Agrave": "\u00C0", + "Agravesmall": "\uF7E0", + "Ahookabove": "\u1EA2", + "Aiecyrillic": "\u04D4", + "Ainvertedbreve": "\u0202", + "Alpha": "\u0391", + "Alphatonos": "\u0386", + "Amacron": "\u0100", + "Amonospace": "\uFF21", + "Aogonek": "\u0104", + "Aring": "\u00C5", + "Aringacute": "\u01FA", + "Aringbelow": "\u1E00", + "Aringsmall": "\uF7E5", + "Asmall": "\uF761", + "Atilde": "\u00C3", + "Atildesmall": "\uF7E3", + "Aybarmenian": "\u0531", + "B": "\u0042", + "Bcircle": "\u24B7", + "Bdotaccent": "\u1E02", + "Bdotbelow": "\u1E04", + "Becyrillic": "\u0411", + "Benarmenian": "\u0532", + "Beta": "\u0392", + "Bhook": "\u0181", + "Blinebelow": "\u1E06", + "Bmonospace": "\uFF22", + "Brevesmall": "\uF6F4", + "Bsmall": "\uF762", + "Btopbar": "\u0182", + "C": "\u0043", + "Caarmenian": "\u053E", + "Cacute": "\u0106", + "Caron": "\uF6CA", + "Caronsmall": "\uF6F5", + "Ccaron": "\u010C", + "Ccedilla": "\u00C7", + "Ccedillaacute": "\u1E08", + "Ccedillasmall": "\uF7E7", + "Ccircle": "\u24B8", + "Ccircumflex": "\u0108", + "Cdot": "\u010A", + "Cdotaccent": "\u010A", + "Cedillasmall": "\uF7B8", + "Chaarmenian": "\u0549", + "Cheabkhasiancyrillic": "\u04BC", + "Checyrillic": "\u0427", + "Chedescenderabkhasiancyrillic": "\u04BE", + "Chedescendercyrillic": "\u04B6", + "Chedieresiscyrillic": "\u04F4", + "Cheharmenian": "\u0543", + "Chekhakassiancyrillic": "\u04CB", + "Cheverticalstrokecyrillic": "\u04B8", + "Chi": "\u03A7", + "Chook": "\u0187", + "Circumflexsmall":
"\uF6F6", + "Cmonospace": "\uFF23", + "Coarmenian": "\u0551", + "Csmall": "\uF763", + "D": "\u0044", + "DZ": "\u01F1", + "DZcaron": "\u01C4", + "Daarmenian": "\u0534", + "Dafrican": "\u0189", + "Dcaron": "\u010E", + "Dcedilla": "\u1E10", + "Dcircle": "\u24B9", + "Dcircumflexbelow": "\u1E12", + "Dcroat": "\u0110", + "Ddotaccent": "\u1E0A", + "Ddotbelow": "\u1E0C", + "Decyrillic": "\u0414", + "Deicoptic": "\u03EE", + "Delta": "\u2206", + "Deltagreek": "\u0394", + "Dhook": "\u018A", + "Dieresis": "\uF6CB", + "DieresisAcute": "\uF6CC", + "DieresisGrave": "\uF6CD", + "Dieresissmall": "\uF7A8", + "Digammagreek": "\u03DC", + "Djecyrillic": "\u0402", + "Dlinebelow": "\u1E0E", + "Dmonospace": "\uFF24", + "Dotaccentsmall": "\uF6F7", + "Dslash": "\u0110", + "Dsmall": "\uF764", + "Dtopbar": "\u018B", + "Dz": "\u01F2", + "Dzcaron": "\u01C5", + "Dzeabkhasiancyrillic": "\u04E0", + "Dzecyrillic": "\u0405", + "Dzhecyrillic": "\u040F", + "E": "\u0045", + "Eacute": "\u00C9", + "Eacutesmall": "\uF7E9", + "Ebreve": "\u0114", + "Ecaron": "\u011A", + "Ecedillabreve": "\u1E1C", + "Echarmenian": "\u0535", + "Ecircle": "\u24BA", + "Ecircumflex": "\u00CA", + "Ecircumflexacute": "\u1EBE", + "Ecircumflexbelow": "\u1E18", + "Ecircumflexdotbelow": "\u1EC6", + "Ecircumflexgrave": "\u1EC0", + "Ecircumflexhookabove": "\u1EC2", + "Ecircumflexsmall": "\uF7EA", + "Ecircumflextilde": "\u1EC4", + "Ecyrillic": "\u0404", + "Edblgrave": "\u0204", + "Edieresis": "\u00CB", + "Edieresissmall": "\uF7EB", + "Edot": "\u0116", + "Edotaccent": "\u0116", + "Edotbelow": "\u1EB8", + "Efcyrillic": "\u0424", + "Egrave": "\u00C8", + "Egravesmall": "\uF7E8", + "Eharmenian": "\u0537", + "Ehookabove": "\u1EBA", + "Eightroman": "\u2167", + "Einvertedbreve": "\u0206", + "Eiotifiedcyrillic": "\u0464", + "Elcyrillic": "\u041B", + "Elevenroman": "\u216A", + "Emacron": "\u0112", + "Emacronacute": "\u1E16", + "Emacrongrave": "\u1E14", + "Emcyrillic": "\u041C", + "Emonospace": "\uFF25", + "Encyrillic": "\u041D", + "Endescendercyrillic": "\u04A2", + "Eng": "\u014A", + "Enghecyrillic": "\u04A4", + "Enhookcyrillic": "\u04C7", + "Eogonek": "\u0118", + "Eopen": "\u0190", + "Epsilon": "\u0395", + "Epsilontonos": "\u0388", + "Ercyrillic": "\u0420", + "Ereversed": "\u018E", + "Ereversedcyrillic": "\u042D", + "Escyrillic": "\u0421", + "Esdescendercyrillic": "\u04AA", + "Esh": "\u01A9", + "Esmall": "\uF765", + "Eta": "\u0397", + "Etarmenian": "\u0538", + "Etatonos": "\u0389", + "Eth": "\u00D0", + "Ethsmall": "\uF7F0", + "Etilde": "\u1EBC", + "Etildebelow": "\u1E1A", + "Euro": "\u20AC", + "Ezh": "\u01B7", + "Ezhcaron": "\u01EE", + "Ezhreversed": "\u01B8", + "F": "\u0046", + "Fcircle": "\u24BB", + "Fdotaccent": "\u1E1E", + "Feharmenian": "\u0556", + "Feicoptic": "\u03E4", + "Fhook": "\u0191", + "Fitacyrillic": "\u0472", + "Fiveroman": "\u2164", + "Fmonospace": "\uFF26", + "Fourroman": "\u2163", + "Fsmall": "\uF766", + "G": "\u0047", + "GBsquare": "\u3387", + "Gacute": "\u01F4", + "Gamma": "\u0393", + "Gammaafrican": "\u0194", + "Gangiacoptic": "\u03EA", + "Gbreve": "\u011E", + "Gcaron": "\u01E6", + "Gcedilla": "\u0122", + "Gcircle": "\u24BC", + "Gcircumflex": "\u011C", + "Gcommaaccent": "\u0122", + "Gdot": "\u0120", + "Gdotaccent": "\u0120", + "Gecyrillic": "\u0413", + "Ghadarmenian": "\u0542", + "Ghemiddlehookcyrillic": "\u0494", + "Ghestrokecyrillic": "\u0492", + "Gheupturncyrillic": "\u0490", + "Ghook": "\u0193", + "Gimarmenian": "\u0533", + "Gjecyrillic": "\u0403", + "Gmacron": "\u1E20", + "Gmonospace": "\uFF27", + "Grave": "\uF6CE", + "Gravesmall": "\uF760", + 
"Gsmall": "\uF767", + "Gsmallhook": "\u029B", + "Gstroke": "\u01E4", + "H": "\u0048", + "H18533": "\u25CF", + "H18543": "\u25AA", + "H18551": "\u25AB", + "H22073": "\u25A1", + "HPsquare": "\u33CB", + "Haabkhasiancyrillic": "\u04A8", + "Hadescendercyrillic": "\u04B2", + "Hardsigncyrillic": "\u042A", + "Hbar": "\u0126", + "Hbrevebelow": "\u1E2A", + "Hcedilla": "\u1E28", + "Hcircle": "\u24BD", + "Hcircumflex": "\u0124", + "Hdieresis": "\u1E26", + "Hdotaccent": "\u1E22", + "Hdotbelow": "\u1E24", + "Hmonospace": "\uFF28", + "Hoarmenian": "\u0540", + "Horicoptic": "\u03E8", + "Hsmall": "\uF768", + "Hungarumlaut": "\uF6CF", + "Hungarumlautsmall": "\uF6F8", + "Hzsquare": "\u3390", + "I": "\u0049", + "IAcyrillic": "\u042F", + "IJ": "\u0132", + "IUcyrillic": "\u042E", + "Iacute": "\u00CD", + "Iacutesmall": "\uF7ED", + "Ibreve": "\u012C", + "Icaron": "\u01CF", + "Icircle": "\u24BE", + "Icircumflex": "\u00CE", + "Icircumflexsmall": "\uF7EE", + "Icyrillic": "\u0406", + "Idblgrave": "\u0208", + "Idieresis": "\u00CF", + "Idieresisacute": "\u1E2E", + "Idieresiscyrillic": "\u04E4", + "Idieresissmall": "\uF7EF", + "Idot": "\u0130", + "Idotaccent": "\u0130", + "Idotbelow": "\u1ECA", + "Iebrevecyrillic": "\u04D6", + "Iecyrillic": "\u0415", + "Ifraktur": "\u2111", + "Igrave": "\u00CC", + "Igravesmall": "\uF7EC", + "Ihookabove": "\u1EC8", + "Iicyrillic": "\u0418", + "Iinvertedbreve": "\u020A", + "Iishortcyrillic": "\u0419", + "Imacron": "\u012A", + "Imacroncyrillic": "\u04E2", + "Imonospace": "\uFF29", + "Iniarmenian": "\u053B", + "Iocyrillic": "\u0401", + "Iogonek": "\u012E", + "Iota": "\u0399", + "Iotaafrican": "\u0196", + "Iotadieresis": "\u03AA", + "Iotatonos": "\u038A", + "Ismall": "\uF769", + "Istroke": "\u0197", + "Itilde": "\u0128", + "Itildebelow": "\u1E2C", + "Izhitsacyrillic": "\u0474", + "Izhitsadblgravecyrillic": "\u0476", + "J": "\u004A", + "Jaarmenian": "\u0541", + "Jcircle": "\u24BF", + "Jcircumflex": "\u0134", + "Jecyrillic": "\u0408", + "Jheharmenian": "\u054B", + "Jmonospace": "\uFF2A", + "Jsmall": "\uF76A", + "K": "\u004B", + "KBsquare": "\u3385", + "KKsquare": "\u33CD", + "Kabashkircyrillic": "\u04A0", + "Kacute": "\u1E30", + "Kacyrillic": "\u041A", + "Kadescendercyrillic": "\u049A", + "Kahookcyrillic": "\u04C3", + "Kappa": "\u039A", + "Kastrokecyrillic": "\u049E", + "Kaverticalstrokecyrillic": "\u049C", + "Kcaron": "\u01E8", + "Kcedilla": "\u0136", + "Kcircle": "\u24C0", + "Kcommaaccent": "\u0136", + "Kdotbelow": "\u1E32", + "Keharmenian": "\u0554", + "Kenarmenian": "\u053F", + "Khacyrillic": "\u0425", + "Kheicoptic": "\u03E6", + "Khook": "\u0198", + "Kjecyrillic": "\u040C", + "Klinebelow": "\u1E34", + "Kmonospace": "\uFF2B", + "Koppacyrillic": "\u0480", + "Koppagreek": "\u03DE", + "Ksicyrillic": "\u046E", + "Ksmall": "\uF76B", + "L": "\u004C", + "LJ": "\u01C7", + "LL": "\uF6BF", + "Lacute": "\u0139", + "Lambda": "\u039B", + "Lcaron": "\u013D", + "Lcedilla": "\u013B", + "Lcircle": "\u24C1", + "Lcircumflexbelow": "\u1E3C", + "Lcommaaccent": "\u013B", + "Ldot": "\u013F", + "Ldotaccent": "\u013F", + "Ldotbelow": "\u1E36", + "Ldotbelowmacron": "\u1E38", + "Liwnarmenian": "\u053C", + "Lj": "\u01C8", + "Ljecyrillic": "\u0409", + "Llinebelow": "\u1E3A", + "Lmonospace": "\uFF2C", + "Lslash": "\u0141", + "Lslashsmall": "\uF6F9", + "Lsmall": "\uF76C", + "M": "\u004D", + "MBsquare": "\u3386", + "Macron": "\uF6D0", + "Macronsmall": "\uF7AF", + "Macute": "\u1E3E", + "Mcircle": "\u24C2", + "Mdotaccent": "\u1E40", + "Mdotbelow": "\u1E42", + "Menarmenian": "\u0544", + "Mmonospace": "\uFF2D", + "Msmall": 
"\uF76D", + "Mturned": "\u019C", + "Mu": "\u039C", + "N": "\u004E", + "NJ": "\u01CA", + "Nacute": "\u0143", + "Ncaron": "\u0147", + "Ncedilla": "\u0145", + "Ncircle": "\u24C3", + "Ncircumflexbelow": "\u1E4A", + "Ncommaaccent": "\u0145", + "Ndotaccent": "\u1E44", + "Ndotbelow": "\u1E46", + "Nhookleft": "\u019D", + "Nineroman": "\u2168", + "Nj": "\u01CB", + "Njecyrillic": "\u040A", + "Nlinebelow": "\u1E48", + "Nmonospace": "\uFF2E", + "Nowarmenian": "\u0546", + "Nsmall": "\uF76E", + "Ntilde": "\u00D1", + "Ntildesmall": "\uF7F1", + "Nu": "\u039D", + "O": "\u004F", + "OE": "\u0152", + "OEsmall": "\uF6FA", + "Oacute": "\u00D3", + "Oacutesmall": "\uF7F3", + "Obarredcyrillic": "\u04E8", + "Obarreddieresiscyrillic": "\u04EA", + "Obreve": "\u014E", + "Ocaron": "\u01D1", + "Ocenteredtilde": "\u019F", + "Ocircle": "\u24C4", + "Ocircumflex": "\u00D4", + "Ocircumflexacute": "\u1ED0", + "Ocircumflexdotbelow": "\u1ED8", + "Ocircumflexgrave": "\u1ED2", + "Ocircumflexhookabove": "\u1ED4", + "Ocircumflexsmall": "\uF7F4", + "Ocircumflextilde": "\u1ED6", + "Ocyrillic": "\u041E", + "Odblacute": "\u0150", + "Odblgrave": "\u020C", + "Odieresis": "\u00D6", + "Odieresiscyrillic": "\u04E6", + "Odieresissmall": "\uF7F6", + "Odotbelow": "\u1ECC", + "Ogoneksmall": "\uF6FB", + "Ograve": "\u00D2", + "Ogravesmall": "\uF7F2", + "Oharmenian": "\u0555", + "Ohm": "\u2126", + "Ohookabove": "\u1ECE", + "Ohorn": "\u01A0", + "Ohornacute": "\u1EDA", + "Ohorndotbelow": "\u1EE2", + "Ohorngrave": "\u1EDC", + "Ohornhookabove": "\u1EDE", + "Ohorntilde": "\u1EE0", + "Ohungarumlaut": "\u0150", + "Oi": "\u01A2", + "Oinvertedbreve": "\u020E", + "Omacron": "\u014C", + "Omacronacute": "\u1E52", + "Omacrongrave": "\u1E50", + "Omega": "\u2126", + "Omegacyrillic": "\u0460", + "Omegagreek": "\u03A9", + "Omegaroundcyrillic": "\u047A", + "Omegatitlocyrillic": "\u047C", + "Omegatonos": "\u038F", + "Omicron": "\u039F", + "Omicrontonos": "\u038C", + "Omonospace": "\uFF2F", + "Oneroman": "\u2160", + "Oogonek": "\u01EA", + "Oogonekmacron": "\u01EC", + "Oopen": "\u0186", + "Oslash": "\u00D8", + "Oslashacute": "\u01FE", + "Oslashsmall": "\uF7F8", + "Osmall": "\uF76F", + "Ostrokeacute": "\u01FE", + "Otcyrillic": "\u047E", + "Otilde": "\u00D5", + "Otildeacute": "\u1E4C", + "Otildedieresis": "\u1E4E", + "Otildesmall": "\uF7F5", + "P": "\u0050", + "Pacute": "\u1E54", + "Pcircle": "\u24C5", + "Pdotaccent": "\u1E56", + "Pecyrillic": "\u041F", + "Peharmenian": "\u054A", + "Pemiddlehookcyrillic": "\u04A6", + "Phi": "\u03A6", + "Phook": "\u01A4", + "Pi": "\u03A0", + "Piwrarmenian": "\u0553", + "Pmonospace": "\uFF30", + "Psi": "\u03A8", + "Psicyrillic": "\u0470", + "Psmall": "\uF770", + "Q": "\u0051", + "Qcircle": "\u24C6", + "Qmonospace": "\uFF31", + "Qsmall": "\uF771", + "R": "\u0052", + "Raarmenian": "\u054C", + "Racute": "\u0154", + "Rcaron": "\u0158", + "Rcedilla": "\u0156", + "Rcircle": "\u24C7", + "Rcommaaccent": "\u0156", + "Rdblgrave": "\u0210", + "Rdotaccent": "\u1E58", + "Rdotbelow": "\u1E5A", + "Rdotbelowmacron": "\u1E5C", + "Reharmenian": "\u0550", + "Rfraktur": "\u211C", + "Rho": "\u03A1", + "Ringsmall": "\uF6FC", + "Rinvertedbreve": "\u0212", + "Rlinebelow": "\u1E5E", + "Rmonospace": "\uFF32", + "Rsmall": "\uF772", + "Rsmallinverted": "\u0281", + "Rsmallinvertedsuperior": "\u02B6", + "S": "\u0053", + "SF010000": "\u250C", + "SF020000": "\u2514", + "SF030000": "\u2510", + "SF040000": "\u2518", + "SF050000": "\u253C", + "SF060000": "\u252C", + "SF070000": "\u2534", + "SF080000": "\u251C", + "SF090000": "\u2524", + "SF100000": "\u2500", + "SF110000": 
"\u2502", + "SF190000": "\u2561", + "SF200000": "\u2562", + "SF210000": "\u2556", + "SF220000": "\u2555", + "SF230000": "\u2563", + "SF240000": "\u2551", + "SF250000": "\u2557", + "SF260000": "\u255D", + "SF270000": "\u255C", + "SF280000": "\u255B", + "SF360000": "\u255E", + "SF370000": "\u255F", + "SF380000": "\u255A", + "SF390000": "\u2554", + "SF400000": "\u2569", + "SF410000": "\u2566", + "SF420000": "\u2560", + "SF430000": "\u2550", + "SF440000": "\u256C", + "SF450000": "\u2567", + "SF460000": "\u2568", + "SF470000": "\u2564", + "SF480000": "\u2565", + "SF490000": "\u2559", + "SF500000": "\u2558", + "SF510000": "\u2552", + "SF520000": "\u2553", + "SF530000": "\u256B", + "SF540000": "\u256A", + "Sacute": "\u015A", + "Sacutedotaccent": "\u1E64", + "Sampigreek": "\u03E0", + "Scaron": "\u0160", + "Scarondotaccent": "\u1E66", + "Scaronsmall": "\uF6FD", + "Scedilla": "\u015E", + "Schwa": "\u018F", + "Schwacyrillic": "\u04D8", + "Schwadieresiscyrillic": "\u04DA", + "Scircle": "\u24C8", + "Scircumflex": "\u015C", + "Scommaaccent": "\u0218", + "Sdotaccent": "\u1E60", + "Sdotbelow": "\u1E62", + "Sdotbelowdotaccent": "\u1E68", + "Seharmenian": "\u054D", + "Sevenroman": "\u2166", + "Shaarmenian": "\u0547", + "Shacyrillic": "\u0428", + "Shchacyrillic": "\u0429", + "Sheicoptic": "\u03E2", + "Shhacyrillic": "\u04BA", + "Shimacoptic": "\u03EC", + "Sigma": "\u03A3", + "Sixroman": "\u2165", + "Smonospace": "\uFF33", + "Softsigncyrillic": "\u042C", + "Ssmall": "\uF773", + "Stigmagreek": "\u03DA", + "T": "\u0054", + "Tau": "\u03A4", + "Tbar": "\u0166", + "Tcaron": "\u0164", + "Tcedilla": "\u0162", + "Tcircle": "\u24C9", + "Tcircumflexbelow": "\u1E70", + "Tcommaaccent": "\u0162", + "Tdotaccent": "\u1E6A", + "Tdotbelow": "\u1E6C", + "Tecyrillic": "\u0422", + "Tedescendercyrillic": "\u04AC", + "Tenroman": "\u2169", + "Tetsecyrillic": "\u04B4", + "Theta": "\u0398", + "Thook": "\u01AC", + "Thorn": "\u00DE", + "Thornsmall": "\uF7FE", + "Threeroman": "\u2162", + "Tildesmall": "\uF6FE", + "Tiwnarmenian": "\u054F", + "Tlinebelow": "\u1E6E", + "Tmonospace": "\uFF34", + "Toarmenian": "\u0539", + "Tonefive": "\u01BC", + "Tonesix": "\u0184", + "Tonetwo": "\u01A7", + "Tretroflexhook": "\u01AE", + "Tsecyrillic": "\u0426", + "Tshecyrillic": "\u040B", + "Tsmall": "\uF774", + "Twelveroman": "\u216B", + "Tworoman": "\u2161", + "U": "\u0055", + "Uacute": "\u00DA", + "Uacutesmall": "\uF7FA", + "Ubreve": "\u016C", + "Ucaron": "\u01D3", + "Ucircle": "\u24CA", + "Ucircumflex": "\u00DB", + "Ucircumflexbelow": "\u1E76", + "Ucircumflexsmall": "\uF7FB", + "Ucyrillic": "\u0423", + "Udblacute": "\u0170", + "Udblgrave": "\u0214", + "Udieresis": "\u00DC", + "Udieresisacute": "\u01D7", + "Udieresisbelow": "\u1E72", + "Udieresiscaron": "\u01D9", + "Udieresiscyrillic": "\u04F0", + "Udieresisgrave": "\u01DB", + "Udieresismacron": "\u01D5", + "Udieresissmall": "\uF7FC", + "Udotbelow": "\u1EE4", + "Ugrave": "\u00D9", + "Ugravesmall": "\uF7F9", + "Uhookabove": "\u1EE6", + "Uhorn": "\u01AF", + "Uhornacute": "\u1EE8", + "Uhorndotbelow": "\u1EF0", + "Uhorngrave": "\u1EEA", + "Uhornhookabove": "\u1EEC", + "Uhorntilde": "\u1EEE", + "Uhungarumlaut": "\u0170", + "Uhungarumlautcyrillic": "\u04F2", + "Uinvertedbreve": "\u0216", + "Ukcyrillic": "\u0478", + "Umacron": "\u016A", + "Umacroncyrillic": "\u04EE", + "Umacrondieresis": "\u1E7A", + "Umonospace": "\uFF35", + "Uogonek": "\u0172", + "Upsilon": "\u03A5", + "Upsilon1": "\u03D2", + "Upsilonacutehooksymbolgreek": "\u03D3", + "Upsilonafrican": "\u01B1", + "Upsilondieresis": "\u03AB", + 
"Upsilondieresishooksymbolgreek": "\u03D4", + "Upsilonhooksymbol": "\u03D2", + "Upsilontonos": "\u038E", + "Uring": "\u016E", + "Ushortcyrillic": "\u040E", + "Usmall": "\uF775", + "Ustraightcyrillic": "\u04AE", + "Ustraightstrokecyrillic": "\u04B0", + "Utilde": "\u0168", + "Utildeacute": "\u1E78", + "Utildebelow": "\u1E74", + "V": "\u0056", + "Vcircle": "\u24CB", + "Vdotbelow": "\u1E7E", + "Vecyrillic": "\u0412", + "Vewarmenian": "\u054E", + "Vhook": "\u01B2", + "Vmonospace": "\uFF36", + "Voarmenian": "\u0548", + "Vsmall": "\uF776", + "Vtilde": "\u1E7C", + "W": "\u0057", + "Wacute": "\u1E82", + "Wcircle": "\u24CC", + "Wcircumflex": "\u0174", + "Wdieresis": "\u1E84", + "Wdotaccent": "\u1E86", + "Wdotbelow": "\u1E88", + "Wgrave": "\u1E80", + "Wmonospace": "\uFF37", + "Wsmall": "\uF777", + "X": "\u0058", + "Xcircle": "\u24CD", + "Xdieresis": "\u1E8C", + "Xdotaccent": "\u1E8A", + "Xeharmenian": "\u053D", + "Xi": "\u039E", + "Xmonospace": "\uFF38", + "Xsmall": "\uF778", + "Y": "\u0059", + "Yacute": "\u00DD", + "Yacutesmall": "\uF7FD", + "Yatcyrillic": "\u0462", + "Ycircle": "\u24CE", + "Ycircumflex": "\u0176", + "Ydieresis": "\u0178", + "Ydieresissmall": "\uF7FF", + "Ydotaccent": "\u1E8E", + "Ydotbelow": "\u1EF4", + "Yericyrillic": "\u042B", + "Yerudieresiscyrillic": "\u04F8", + "Ygrave": "\u1EF2", + "Yhook": "\u01B3", + "Yhookabove": "\u1EF6", + "Yiarmenian": "\u0545", + "Yicyrillic": "\u0407", + "Yiwnarmenian": "\u0552", + "Ymonospace": "\uFF39", + "Ysmall": "\uF779", + "Ytilde": "\u1EF8", + "Yusbigcyrillic": "\u046A", + "Yusbigiotifiedcyrillic": "\u046C", + "Yuslittlecyrillic": "\u0466", + "Yuslittleiotifiedcyrillic": "\u0468", + "Z": "\u005A", + "Zaarmenian": "\u0536", + "Zacute": "\u0179", + "Zcaron": "\u017D", + "Zcaronsmall": "\uF6FF", + "Zcircle": "\u24CF", + "Zcircumflex": "\u1E90", + "Zdot": "\u017B", + "Zdotaccent": "\u017B", + "Zdotbelow": "\u1E92", + "Zecyrillic": "\u0417", + "Zedescendercyrillic": "\u0498", + "Zedieresiscyrillic": "\u04DE", + "Zeta": "\u0396", + "Zhearmenian": "\u053A", + "Zhebrevecyrillic": "\u04C1", + "Zhecyrillic": "\u0416", + "Zhedescendercyrillic": "\u0496", + "Zhedieresiscyrillic": "\u04DC", + "Zlinebelow": "\u1E94", + "Zmonospace": "\uFF3A", + "Zsmall": "\uF77A", + "Zstroke": "\u01B5", + "a": "\u0061", + "aabengali": "\u0986", + "aacute": "\u00E1", + "aadeva": "\u0906", + "aagujarati": "\u0A86", + "aagurmukhi": "\u0A06", + "aamatragurmukhi": "\u0A3E", + "aarusquare": "\u3303", + "aavowelsignbengali": "\u09BE", + "aavowelsigndeva": "\u093E", + "aavowelsigngujarati": "\u0ABE", + "abbreviationmarkarmenian": "\u055F", + "abbreviationsigndeva": "\u0970", + "abengali": "\u0985", + "abopomofo": "\u311A", + "abreve": "\u0103", + "abreveacute": "\u1EAF", + "abrevecyrillic": "\u04D1", + "abrevedotbelow": "\u1EB7", + "abrevegrave": "\u1EB1", + "abrevehookabove": "\u1EB3", + "abrevetilde": "\u1EB5", + "acaron": "\u01CE", + "acircle": "\u24D0", + "acircumflex": "\u00E2", + "acircumflexacute": "\u1EA5", + "acircumflexdotbelow": "\u1EAD", + "acircumflexgrave": "\u1EA7", + "acircumflexhookabove": "\u1EA9", + "acircumflextilde": "\u1EAB", + "acute": "\u00B4", + "acutebelowcmb": "\u0317", + "acutecmb": "\u0301", + "acutecomb": "\u0301", + "acutedeva": "\u0954", + "acutelowmod": "\u02CF", + "acutetonecmb": "\u0341", + "acyrillic": "\u0430", + "adblgrave": "\u0201", + "addakgurmukhi": "\u0A71", + "adeva": "\u0905", + "adieresis": "\u00E4", + "adieresiscyrillic": "\u04D3", + "adieresismacron": "\u01DF", + "adotbelow": "\u1EA1", + "adotmacron": "\u01E1", + "ae": "\u00E6", + 
"aeacute": "\u01FD", + "aekorean": "\u3150", + "aemacron": "\u01E3", + "afii00208": "\u2015", + "afii08941": "\u20A4", + "afii10017": "\u0410", + "afii10018": "\u0411", + "afii10019": "\u0412", + "afii10020": "\u0413", + "afii10021": "\u0414", + "afii10022": "\u0415", + "afii10023": "\u0401", + "afii10024": "\u0416", + "afii10025": "\u0417", + "afii10026": "\u0418", + "afii10027": "\u0419", + "afii10028": "\u041A", + "afii10029": "\u041B", + "afii10030": "\u041C", + "afii10031": "\u041D", + "afii10032": "\u041E", + "afii10033": "\u041F", + "afii10034": "\u0420", + "afii10035": "\u0421", + "afii10036": "\u0422", + "afii10037": "\u0423", + "afii10038": "\u0424", + "afii10039": "\u0425", + "afii10040": "\u0426", + "afii10041": "\u0427", + "afii10042": "\u0428", + "afii10043": "\u0429", + "afii10044": "\u042A", + "afii10045": "\u042B", + "afii10046": "\u042C", + "afii10047": "\u042D", + "afii10048": "\u042E", + "afii10049": "\u042F", + "afii10050": "\u0490", + "afii10051": "\u0402", + "afii10052": "\u0403", + "afii10053": "\u0404", + "afii10054": "\u0405", + "afii10055": "\u0406", + "afii10056": "\u0407", + "afii10057": "\u0408", + "afii10058": "\u0409", + "afii10059": "\u040A", + "afii10060": "\u040B", + "afii10061": "\u040C", + "afii10062": "\u040E", + "afii10063": "\uF6C4", + "afii10064": "\uF6C5", + "afii10065": "\u0430", + "afii10066": "\u0431", + "afii10067": "\u0432", + "afii10068": "\u0433", + "afii10069": "\u0434", + "afii10070": "\u0435", + "afii10071": "\u0451", + "afii10072": "\u0436", + "afii10073": "\u0437", + "afii10074": "\u0438", + "afii10075": "\u0439", + "afii10076": "\u043A", + "afii10077": "\u043B", + "afii10078": "\u043C", + "afii10079": "\u043D", + "afii10080": "\u043E", + "afii10081": "\u043F", + "afii10082": "\u0440", + "afii10083": "\u0441", + "afii10084": "\u0442", + "afii10085": "\u0443", + "afii10086": "\u0444", + "afii10087": "\u0445", + "afii10088": "\u0446", + "afii10089": "\u0447", + "afii10090": "\u0448", + "afii10091": "\u0449", + "afii10092": "\u044A", + "afii10093": "\u044B", + "afii10094": "\u044C", + "afii10095": "\u044D", + "afii10096": "\u044E", + "afii10097": "\u044F", + "afii10098": "\u0491", + "afii10099": "\u0452", + "afii10100": "\u0453", + "afii10101": "\u0454", + "afii10102": "\u0455", + "afii10103": "\u0456", + "afii10104": "\u0457", + "afii10105": "\u0458", + "afii10106": "\u0459", + "afii10107": "\u045A", + "afii10108": "\u045B", + "afii10109": "\u045C", + "afii10110": "\u045E", + "afii10145": "\u040F", + "afii10146": "\u0462", + "afii10147": "\u0472", + "afii10148": "\u0474", + "afii10192": "\uF6C6", + "afii10193": "\u045F", + "afii10194": "\u0463", + "afii10195": "\u0473", + "afii10196": "\u0475", + "afii10831": "\uF6C7", + "afii10832": "\uF6C8", + "afii10846": "\u04D9", + "afii299": "\u200E", + "afii300": "\u200F", + "afii301": "\u200D", + "afii57381": "\u066A", + "afii57388": "\u060C", + "afii57392": "\u0660", + "afii57393": "\u0661", + "afii57394": "\u0662", + "afii57395": "\u0663", + "afii57396": "\u0664", + "afii57397": "\u0665", + "afii57398": "\u0666", + "afii57399": "\u0667", + "afii57400": "\u0668", + "afii57401": "\u0669", + "afii57403": "\u061B", + "afii57407": "\u061F", + "afii57409": "\u0621", + "afii57410": "\u0622", + "afii57411": "\u0623", + "afii57412": "\u0624", + "afii57413": "\u0625", + "afii57414": "\u0626", + "afii57415": "\u0627", + "afii57416": "\u0628", + "afii57417": "\u0629", + "afii57418": "\u062A", + "afii57419": "\u062B", + "afii57420": "\u062C", + "afii57421": "\u062D", + "afii57422": "\u062E", + "afii57423": 
"\u062F", + "afii57424": "\u0630", + "afii57425": "\u0631", + "afii57426": "\u0632", + "afii57427": "\u0633", + "afii57428": "\u0634", + "afii57429": "\u0635", + "afii57430": "\u0636", + "afii57431": "\u0637", + "afii57432": "\u0638", + "afii57433": "\u0639", + "afii57434": "\u063A", + "afii57440": "\u0640", + "afii57441": "\u0641", + "afii57442": "\u0642", + "afii57443": "\u0643", + "afii57444": "\u0644", + "afii57445": "\u0645", + "afii57446": "\u0646", + "afii57448": "\u0648", + "afii57449": "\u0649", + "afii57450": "\u064A", + "afii57451": "\u064B", + "afii57452": "\u064C", + "afii57453": "\u064D", + "afii57454": "\u064E", + "afii57455": "\u064F", + "afii57456": "\u0650", + "afii57457": "\u0651", + "afii57458": "\u0652", + "afii57470": "\u0647", + "afii57505": "\u06A4", + "afii57506": "\u067E", + "afii57507": "\u0686", + "afii57508": "\u0698", + "afii57509": "\u06AF", + "afii57511": "\u0679", + "afii57512": "\u0688", + "afii57513": "\u0691", + "afii57514": "\u06BA", + "afii57519": "\u06D2", + "afii57534": "\u06D5", + "afii57636": "\u20AA", + "afii57645": "\u05BE", + "afii57658": "\u05C3", + "afii57664": "\u05D0", + "afii57665": "\u05D1", + "afii57666": "\u05D2", + "afii57667": "\u05D3", + "afii57668": "\u05D4", + "afii57669": "\u05D5", + "afii57670": "\u05D6", + "afii57671": "\u05D7", + "afii57672": "\u05D8", + "afii57673": "\u05D9", + "afii57674": "\u05DA", + "afii57675": "\u05DB", + "afii57676": "\u05DC", + "afii57677": "\u05DD", + "afii57678": "\u05DE", + "afii57679": "\u05DF", + "afii57680": "\u05E0", + "afii57681": "\u05E1", + "afii57682": "\u05E2", + "afii57683": "\u05E3", + "afii57684": "\u05E4", + "afii57685": "\u05E5", + "afii57686": "\u05E6", + "afii57687": "\u05E7", + "afii57688": "\u05E8", + "afii57689": "\u05E9", + "afii57690": "\u05EA", + "afii57694": "\uFB2A", + "afii57695": "\uFB2B", + "afii57700": "\uFB4B", + "afii57705": "\uFB1F", + "afii57716": "\u05F0", + "afii57717": "\u05F1", + "afii57718": "\u05F2", + "afii57723": "\uFB35", + "afii57793": "\u05B4", + "afii57794": "\u05B5", + "afii57795": "\u05B6", + "afii57796": "\u05BB", + "afii57797": "\u05B8", + "afii57798": "\u05B7", + "afii57799": "\u05B0", + "afii57800": "\u05B2", + "afii57801": "\u05B1", + "afii57802": "\u05B3", + "afii57803": "\u05C2", + "afii57804": "\u05C1", + "afii57806": "\u05B9", + "afii57807": "\u05BC", + "afii57839": "\u05BD", + "afii57841": "\u05BF", + "afii57842": "\u05C0", + "afii57929": "\u02BC", + "afii61248": "\u2105", + "afii61289": "\u2113", + "afii61352": "\u2116", + "afii61573": "\u202C", + "afii61574": "\u202D", + "afii61575": "\u202E", + "afii61664": "\u200C", + "afii63167": "\u066D", + "afii64937": "\u02BD", + "agrave": "\u00E0", + "agujarati": "\u0A85", + "agurmukhi": "\u0A05", + "ahiragana": "\u3042", + "ahookabove": "\u1EA3", + "aibengali": "\u0990", + "aibopomofo": "\u311E", + "aideva": "\u0910", + "aiecyrillic": "\u04D5", + "aigujarati": "\u0A90", + "aigurmukhi": "\u0A10", + "aimatragurmukhi": "\u0A48", + "ainarabic": "\u0639", + "ainfinalarabic": "\uFECA", + "aininitialarabic": "\uFECB", + "ainmedialarabic": "\uFECC", + "ainvertedbreve": "\u0203", + "aivowelsignbengali": "\u09C8", + "aivowelsigndeva": "\u0948", + "aivowelsigngujarati": "\u0AC8", + "akatakana": "\u30A2", + "akatakanahalfwidth": "\uFF71", + "akorean": "\u314F", + "alef": "\u05D0", + "alefarabic": "\u0627", + "alefdageshhebrew": "\uFB30", + "aleffinalarabic": "\uFE8E", + "alefhamzaabovearabic": "\u0623", + "alefhamzaabovefinalarabic": "\uFE84", + "alefhamzabelowarabic": "\u0625", + "alefhamzabelowfinalarabic": 
"\uFE88", + "alefhebrew": "\u05D0", + "aleflamedhebrew": "\uFB4F", + "alefmaddaabovearabic": "\u0622", + "alefmaddaabovefinalarabic": "\uFE82", + "alefmaksuraarabic": "\u0649", + "alefmaksurafinalarabic": "\uFEF0", + "alefmaksurainitialarabic": "\uFEF3", + "alefmaksuramedialarabic": "\uFEF4", + "alefpatahhebrew": "\uFB2E", + "alefqamatshebrew": "\uFB2F", + "aleph": "\u2135", + "allequal": "\u224C", + "alpha": "\u03B1", + "alphatonos": "\u03AC", + "amacron": "\u0101", + "amonospace": "\uFF41", + "ampersand": "\u0026", + "ampersandmonospace": "\uFF06", + "ampersandsmall": "\uF726", + "amsquare": "\u33C2", + "anbopomofo": "\u3122", + "angbopomofo": "\u3124", + "angkhankhuthai": "\u0E5A", + "angle": "\u2220", + "anglebracketleft": "\u3008", + "anglebracketleftvertical": "\uFE3F", + "anglebracketright": "\u3009", + "anglebracketrightvertical": "\uFE40", + "angleleft": "\u2329", + "angleright": "\u232A", + "angstrom": "\u212B", + "anoteleia": "\u0387", + "anudattadeva": "\u0952", + "anusvarabengali": "\u0982", + "anusvaradeva": "\u0902", + "anusvaragujarati": "\u0A82", + "aogonek": "\u0105", + "apaatosquare": "\u3300", + "aparen": "\u249C", + "apostrophearmenian": "\u055A", + "apostrophemod": "\u02BC", + "apple": "\uF8FF", + "approaches": "\u2250", + "approxequal": "\u2248", + "approxequalorimage": "\u2252", + "approximatelyequal": "\u2245", + "araeaekorean": "\u318E", + "araeakorean": "\u318D", + "arc": "\u2312", + "arighthalfring": "\u1E9A", + "aring": "\u00E5", + "aringacute": "\u01FB", + "aringbelow": "\u1E01", + "arrowboth": "\u2194", + "arrowdashdown": "\u21E3", + "arrowdashleft": "\u21E0", + "arrowdashright": "\u21E2", + "arrowdashup": "\u21E1", + "arrowdblboth": "\u21D4", + "arrowdbldown": "\u21D3", + "arrowdblleft": "\u21D0", + "arrowdblright": "\u21D2", + "arrowdblup": "\u21D1", + "arrowdown": "\u2193", + "arrowdownleft": "\u2199", + "arrowdownright": "\u2198", + "arrowdownwhite": "\u21E9", + "arrowheaddownmod": "\u02C5", + "arrowheadleftmod": "\u02C2", + "arrowheadrightmod": "\u02C3", + "arrowheadupmod": "\u02C4", + "arrowhorizex": "\uF8E7", + "arrowleft": "\u2190", + "arrowleftdbl": "\u21D0", + "arrowleftdblstroke": "\u21CD", + "arrowleftoverright": "\u21C6", + "arrowleftwhite": "\u21E6", + "arrowright": "\u2192", + "arrowrightdblstroke": "\u21CF", + "arrowrightheavy": "\u279E", + "arrowrightoverleft": "\u21C4", + "arrowrightwhite": "\u21E8", + "arrowtableft": "\u21E4", + "arrowtabright": "\u21E5", + "arrowup": "\u2191", + "arrowupdn": "\u2195", + "arrowupdnbse": "\u21A8", + "arrowupdownbase": "\u21A8", + "arrowupleft": "\u2196", + "arrowupleftofdown": "\u21C5", + "arrowupright": "\u2197", + "arrowupwhite": "\u21E7", + "arrowvertex": "\uF8E6", + "asciicircum": "\u005E", + "asciicircummonospace": "\uFF3E", + "asciitilde": "\u007E", + "asciitildemonospace": "\uFF5E", + "ascript": "\u0251", + "ascriptturned": "\u0252", + "asmallhiragana": "\u3041", + "asmallkatakana": "\u30A1", + "asmallkatakanahalfwidth": "\uFF67", + "asterisk": "\u002A", + "asteriskaltonearabic": "\u066D", + "asteriskarabic": "\u066D", + "asteriskmath": "\u2217", + "asteriskmonospace": "\uFF0A", + "asterisksmall": "\uFE61", + "asterism": "\u2042", + "asuperior": "\uF6E9", + "asymptoticallyequal": "\u2243", + "at": "\u0040", + "atilde": "\u00E3", + "atmonospace": "\uFF20", + "atsmall": "\uFE6B", + "aturned": "\u0250", + "aubengali": "\u0994", + "aubopomofo": "\u3120", + "audeva": "\u0914", + "augujarati": "\u0A94", + "augurmukhi": "\u0A14", + "aulengthmarkbengali": "\u09D7", + "aumatragurmukhi": "\u0A4C", + 
"auvowelsignbengali": "\u09CC", + "auvowelsigndeva": "\u094C", + "auvowelsigngujarati": "\u0ACC", + "avagrahadeva": "\u093D", + "aybarmenian": "\u0561", + "ayin": "\u05E2", + "ayinaltonehebrew": "\uFB20", + "ayinhebrew": "\u05E2", + "b": "\u0062", + "babengali": "\u09AC", + "backslash": "\u005C", + "backslashmonospace": "\uFF3C", + "badeva": "\u092C", + "bagujarati": "\u0AAC", + "bagurmukhi": "\u0A2C", + "bahiragana": "\u3070", + "bahtthai": "\u0E3F", + "bakatakana": "\u30D0", + "bar": "\u007C", + "barmonospace": "\uFF5C", + "bbopomofo": "\u3105", + "bcircle": "\u24D1", + "bdotaccent": "\u1E03", + "bdotbelow": "\u1E05", + "beamedsixteenthnotes": "\u266C", + "because": "\u2235", + "becyrillic": "\u0431", + "beharabic": "\u0628", + "behfinalarabic": "\uFE90", + "behinitialarabic": "\uFE91", + "behiragana": "\u3079", + "behmedialarabic": "\uFE92", + "behmeeminitialarabic": "\uFC9F", + "behmeemisolatedarabic": "\uFC08", + "behnoonfinalarabic": "\uFC6D", + "bekatakana": "\u30D9", + "benarmenian": "\u0562", + "bet": "\u05D1", + "beta": "\u03B2", + "betasymbolgreek": "\u03D0", + "betdagesh": "\uFB31", + "betdageshhebrew": "\uFB31", + "bethebrew": "\u05D1", + "betrafehebrew": "\uFB4C", + "bhabengali": "\u09AD", + "bhadeva": "\u092D", + "bhagujarati": "\u0AAD", + "bhagurmukhi": "\u0A2D", + "bhook": "\u0253", + "bihiragana": "\u3073", + "bikatakana": "\u30D3", + "bilabialclick": "\u0298", + "bindigurmukhi": "\u0A02", + "birusquare": "\u3331", + "blackcircle": "\u25CF", + "blackdiamond": "\u25C6", + "blackdownpointingtriangle": "\u25BC", + "blackleftpointingpointer": "\u25C4", + "blackleftpointingtriangle": "\u25C0", + "blacklenticularbracketleft": "\u3010", + "blacklenticularbracketleftvertical": "\uFE3B", + "blacklenticularbracketright": "\u3011", + "blacklenticularbracketrightvertical": "\uFE3C", + "blacklowerlefttriangle": "\u25E3", + "blacklowerrighttriangle": "\u25E2", + "blackrectangle": "\u25AC", + "blackrightpointingpointer": "\u25BA", + "blackrightpointingtriangle": "\u25B6", + "blacksmallsquare": "\u25AA", + "blacksmilingface": "\u263B", + "blacksquare": "\u25A0", + "blackstar": "\u2605", + "blackupperlefttriangle": "\u25E4", + "blackupperrighttriangle": "\u25E5", + "blackuppointingsmalltriangle": "\u25B4", + "blackuppointingtriangle": "\u25B2", + "blank": "\u2423", + "blinebelow": "\u1E07", + "block": "\u2588", + "bmonospace": "\uFF42", + "bobaimaithai": "\u0E1A", + "bohiragana": "\u307C", + "bokatakana": "\u30DC", + "bparen": "\u249D", + "bqsquare": "\u33C3", + "braceex": "\uF8F4", + "braceleft": "\u007B", + "braceleftbt": "\uF8F3", + "braceleftmid": "\uF8F2", + "braceleftmonospace": "\uFF5B", + "braceleftsmall": "\uFE5B", + "bracelefttp": "\uF8F1", + "braceleftvertical": "\uFE37", + "braceright": "\u007D", + "bracerightbt": "\uF8FE", + "bracerightmid": "\uF8FD", + "bracerightmonospace": "\uFF5D", + "bracerightsmall": "\uFE5C", + "bracerighttp": "\uF8FC", + "bracerightvertical": "\uFE38", + "bracketleft": "\u005B", + "bracketleftbt": "\uF8F0", + "bracketleftex": "\uF8EF", + "bracketleftmonospace": "\uFF3B", + "bracketlefttp": "\uF8EE", + "bracketright": "\u005D", + "bracketrightbt": "\uF8FB", + "bracketrightex": "\uF8FA", + "bracketrightmonospace": "\uFF3D", + "bracketrighttp": "\uF8F9", + "breve": "\u02D8", + "brevebelowcmb": "\u032E", + "brevecmb": "\u0306", + "breveinvertedbelowcmb": "\u032F", + "breveinvertedcmb": "\u0311", + "breveinverteddoublecmb": "\u0361", + "bridgebelowcmb": "\u032A", + "bridgeinvertedbelowcmb": "\u033A", + "brokenbar": "\u00A6", + "bstroke": "\u0180", + 
"bsuperior": "\uF6EA", + "btopbar": "\u0183", + "buhiragana": "\u3076", + "bukatakana": "\u30D6", + "bullet": "\u2022", + "bulletinverse": "\u25D8", + "bulletoperator": "\u2219", + "bullseye": "\u25CE", + "c": "\u0063", + "caarmenian": "\u056E", + "cabengali": "\u099A", + "cacute": "\u0107", + "cadeva": "\u091A", + "cagujarati": "\u0A9A", + "cagurmukhi": "\u0A1A", + "calsquare": "\u3388", + "candrabindubengali": "\u0981", + "candrabinducmb": "\u0310", + "candrabindudeva": "\u0901", + "candrabindugujarati": "\u0A81", + "capslock": "\u21EA", + "careof": "\u2105", + "caron": "\u02C7", + "caronbelowcmb": "\u032C", + "caroncmb": "\u030C", + "carriagereturn": "\u21B5", + "cbopomofo": "\u3118", + "ccaron": "\u010D", + "ccedilla": "\u00E7", + "ccedillaacute": "\u1E09", + "ccircle": "\u24D2", + "ccircumflex": "\u0109", + "ccurl": "\u0255", + "cdot": "\u010B", + "cdotaccent": "\u010B", + "cdsquare": "\u33C5", + "cedilla": "\u00B8", + "cedillacmb": "\u0327", + "cent": "\u00A2", + "centigrade": "\u2103", + "centinferior": "\uF6DF", + "centmonospace": "\uFFE0", + "centoldstyle": "\uF7A2", + "centsuperior": "\uF6E0", + "chaarmenian": "\u0579", + "chabengali": "\u099B", + "chadeva": "\u091B", + "chagujarati": "\u0A9B", + "chagurmukhi": "\u0A1B", + "chbopomofo": "\u3114", + "cheabkhasiancyrillic": "\u04BD", + "checkmark": "\u2713", + "checyrillic": "\u0447", + "chedescenderabkhasiancyrillic": "\u04BF", + "chedescendercyrillic": "\u04B7", + "chedieresiscyrillic": "\u04F5", + "cheharmenian": "\u0573", + "chekhakassiancyrillic": "\u04CC", + "cheverticalstrokecyrillic": "\u04B9", + "chi": "\u03C7", + "chieuchacirclekorean": "\u3277", + "chieuchaparenkorean": "\u3217", + "chieuchcirclekorean": "\u3269", + "chieuchkorean": "\u314A", + "chieuchparenkorean": "\u3209", + "chochangthai": "\u0E0A", + "chochanthai": "\u0E08", + "chochingthai": "\u0E09", + "chochoethai": "\u0E0C", + "chook": "\u0188", + "cieucacirclekorean": "\u3276", + "cieucaparenkorean": "\u3216", + "cieuccirclekorean": "\u3268", + "cieuckorean": "\u3148", + "cieucparenkorean": "\u3208", + "cieucuparenkorean": "\u321C", + "circle": "\u25CB", + "circlemultiply": "\u2297", + "circleot": "\u2299", + "circleplus": "\u2295", + "circlepostalmark": "\u3036", + "circlewithlefthalfblack": "\u25D0", + "circlewithrighthalfblack": "\u25D1", + "circumflex": "\u02C6", + "circumflexbelowcmb": "\u032D", + "circumflexcmb": "\u0302", + "clear": "\u2327", + "clickalveolar": "\u01C2", + "clickdental": "\u01C0", + "clicklateral": "\u01C1", + "clickretroflex": "\u01C3", + "club": "\u2663", + "clubsuitblack": "\u2663", + "clubsuitwhite": "\u2667", + "cmcubedsquare": "\u33A4", + "cmonospace": "\uFF43", + "cmsquaredsquare": "\u33A0", + "coarmenian": "\u0581", + "colon": "\u003A", + "colonmonetary": "\u20A1", + "colonmonospace": "\uFF1A", + "colonsign": "\u20A1", + "colonsmall": "\uFE55", + "colontriangularhalfmod": "\u02D1", + "colontriangularmod": "\u02D0", + "comma": "\u002C", + "commaabovecmb": "\u0313", + "commaaboverightcmb": "\u0315", + "commaaccent": "\uF6C3", + "commaarabic": "\u060C", + "commaarmenian": "\u055D", + "commainferior": "\uF6E1", + "commamonospace": "\uFF0C", + "commareversedabovecmb": "\u0314", + "commareversedmod": "\u02BD", + "commasmall": "\uFE50", + "commasuperior": "\uF6E2", + "commaturnedabovecmb": "\u0312", + "commaturnedmod": "\u02BB", + "compass": "\u263C", + "congruent": "\u2245", + "contourintegral": "\u222E", + "control": "\u2303", + "controlACK": "\u0006", + "controlBEL": "\u0007", + "controlBS": "\u0008", + "controlCAN": "\u0018", + 
"controlCR": "\u000D", + "controlDC1": "\u0011", + "controlDC2": "\u0012", + "controlDC3": "\u0013", + "controlDC4": "\u0014", + "controlDEL": "\u007F", + "controlDLE": "\u0010", + "controlEM": "\u0019", + "controlENQ": "\u0005", + "controlEOT": "\u0004", + "controlESC": "\u001B", + "controlETB": "\u0017", + "controlETX": "\u0003", + "controlFF": "\u000C", + "controlFS": "\u001C", + "controlGS": "\u001D", + "controlHT": "\u0009", + "controlLF": "\u000A", + "controlNAK": "\u0015", + "controlRS": "\u001E", + "controlSI": "\u000F", + "controlSO": "\u000E", + "controlSOT": "\u0002", + "controlSTX": "\u0001", + "controlSUB": "\u001A", + "controlSYN": "\u0016", + "controlUS": "\u001F", + "controlVT": "\u000B", + "copyright": "\u00A9", + "copyrightsans": "\uF8E9", + "copyrightserif": "\uF6D9", + "cornerbracketleft": "\u300C", + "cornerbracketlefthalfwidth": "\uFF62", + "cornerbracketleftvertical": "\uFE41", + "cornerbracketright": "\u300D", + "cornerbracketrighthalfwidth": "\uFF63", + "cornerbracketrightvertical": "\uFE42", + "corporationsquare": "\u337F", + "cosquare": "\u33C7", + "coverkgsquare": "\u33C6", + "cparen": "\u249E", + "cruzeiro": "\u20A2", + "cstretched": "\u0297", + "curlyand": "\u22CF", + "curlyor": "\u22CE", + "currency": "\u00A4", + "cyrBreve": "\uF6D1", + "cyrFlex": "\uF6D2", + "cyrbreve": "\uF6D4", + "cyrflex": "\uF6D5", + "d": "\u0064", + "daarmenian": "\u0564", + "dabengali": "\u09A6", + "dadarabic": "\u0636", + "dadeva": "\u0926", + "dadfinalarabic": "\uFEBE", + "dadinitialarabic": "\uFEBF", + "dadmedialarabic": "\uFEC0", + "dagesh": "\u05BC", + "dageshhebrew": "\u05BC", + "dagger": "\u2020", + "daggerdbl": "\u2021", + "dagujarati": "\u0AA6", + "dagurmukhi": "\u0A26", + "dahiragana": "\u3060", + "dakatakana": "\u30C0", + "dalarabic": "\u062F", + "dalet": "\u05D3", + "daletdagesh": "\uFB33", + "daletdageshhebrew": "\uFB33", + "dalethatafpatah": "\u05D3\u05B2", + "dalethatafpatahhebrew": "\u05D3\u05B2", + "dalethatafsegol": "\u05D3\u05B1", + "dalethatafsegolhebrew": "\u05D3\u05B1", + "dalethebrew": "\u05D3", + "dalethiriq": "\u05D3\u05B4", + "dalethiriqhebrew": "\u05D3\u05B4", + "daletholam": "\u05D3\u05B9", + "daletholamhebrew": "\u05D3\u05B9", + "daletpatah": "\u05D3\u05B7", + "daletpatahhebrew": "\u05D3\u05B7", + "daletqamats": "\u05D3\u05B8", + "daletqamatshebrew": "\u05D3\u05B8", + "daletqubuts": "\u05D3\u05BB", + "daletqubutshebrew": "\u05D3\u05BB", + "daletsegol": "\u05D3\u05B6", + "daletsegolhebrew": "\u05D3\u05B6", + "daletsheva": "\u05D3\u05B0", + "daletshevahebrew": "\u05D3\u05B0", + "dalettsere": "\u05D3\u05B5", + "dalettserehebrew": "\u05D3\u05B5", + "dalfinalarabic": "\uFEAA", + "dammaarabic": "\u064F", + "dammalowarabic": "\u064F", + "dammatanaltonearabic": "\u064C", + "dammatanarabic": "\u064C", + "danda": "\u0964", + "dargahebrew": "\u05A7", + "dargalefthebrew": "\u05A7", + "dasiapneumatacyrilliccmb": "\u0485", + "dblGrave": "\uF6D3", + "dblanglebracketleft": "\u300A", + "dblanglebracketleftvertical": "\uFE3D", + "dblanglebracketright": "\u300B", + "dblanglebracketrightvertical": "\uFE3E", + "dblarchinvertedbelowcmb": "\u032B", + "dblarrowleft": "\u21D4", + "dblarrowright": "\u21D2", + "dbldanda": "\u0965", + "dblgrave": "\uF6D6", + "dblgravecmb": "\u030F", + "dblintegral": "\u222C", + "dbllowline": "\u2017", + "dbllowlinecmb": "\u0333", + "dbloverlinecmb": "\u033F", + "dblprimemod": "\u02BA", + "dblverticalbar": "\u2016", + "dblverticallineabovecmb": "\u030E", + "dbopomofo": "\u3109", + "dbsquare": "\u33C8", + "dcaron": "\u010F", + "dcedilla": "\u1E11", + 
"dcircle": "\u24D3", + "dcircumflexbelow": "\u1E13", + "dcroat": "\u0111", + "ddabengali": "\u09A1", + "ddadeva": "\u0921", + "ddagujarati": "\u0AA1", + "ddagurmukhi": "\u0A21", + "ddalarabic": "\u0688", + "ddalfinalarabic": "\uFB89", + "dddhadeva": "\u095C", + "ddhabengali": "\u09A2", + "ddhadeva": "\u0922", + "ddhagujarati": "\u0AA2", + "ddhagurmukhi": "\u0A22", + "ddotaccent": "\u1E0B", + "ddotbelow": "\u1E0D", + "decimalseparatorarabic": "\u066B", + "decimalseparatorpersian": "\u066B", + "decyrillic": "\u0434", + "degree": "\u00B0", + "dehihebrew": "\u05AD", + "dehiragana": "\u3067", + "deicoptic": "\u03EF", + "dekatakana": "\u30C7", + "deleteleft": "\u232B", + "deleteright": "\u2326", + "delta": "\u03B4", + "deltaturned": "\u018D", + "denominatorminusonenumeratorbengali": "\u09F8", + "dezh": "\u02A4", + "dhabengali": "\u09A7", + "dhadeva": "\u0927", + "dhagujarati": "\u0AA7", + "dhagurmukhi": "\u0A27", + "dhook": "\u0257", + "dialytikatonos": "\u0385", + "dialytikatonoscmb": "\u0344", + "diamond": "\u2666", + "diamondsuitwhite": "\u2662", + "dieresis": "\u00A8", + "dieresisacute": "\uF6D7", + "dieresisbelowcmb": "\u0324", + "dieresiscmb": "\u0308", + "dieresisgrave": "\uF6D8", + "dieresistonos": "\u0385", + "dihiragana": "\u3062", + "dikatakana": "\u30C2", + "dittomark": "\u3003", + "divide": "\u00F7", + "divides": "\u2223", + "divisionslash": "\u2215", + "djecyrillic": "\u0452", + "dkshade": "\u2593", + "dlinebelow": "\u1E0F", + "dlsquare": "\u3397", + "dmacron": "\u0111", + "dmonospace": "\uFF44", + "dnblock": "\u2584", + "dochadathai": "\u0E0E", + "dodekthai": "\u0E14", + "dohiragana": "\u3069", + "dokatakana": "\u30C9", + "dollar": "\u0024", + "dollarinferior": "\uF6E3", + "dollarmonospace": "\uFF04", + "dollaroldstyle": "\uF724", + "dollarsmall": "\uFE69", + "dollarsuperior": "\uF6E4", + "dong": "\u20AB", + "dorusquare": "\u3326", + "dotaccent": "\u02D9", + "dotaccentcmb": "\u0307", + "dotbelowcmb": "\u0323", + "dotbelowcomb": "\u0323", + "dotkatakana": "\u30FB", + "dotlessi": "\u0131", + "dotlessj": "\uF6BE", + "dotlessjstrokehook": "\u0284", + "dotmath": "\u22C5", + "dottedcircle": "\u25CC", + "doubleyodpatah": "\uFB1F", + "doubleyodpatahhebrew": "\uFB1F", + "downtackbelowcmb": "\u031E", + "downtackmod": "\u02D5", + "dparen": "\u249F", + "dsuperior": "\uF6EB", + "dtail": "\u0256", + "dtopbar": "\u018C", + "duhiragana": "\u3065", + "dukatakana": "\u30C5", + "dz": "\u01F3", + "dzaltone": "\u02A3", + "dzcaron": "\u01C6", + "dzcurl": "\u02A5", + "dzeabkhasiancyrillic": "\u04E1", + "dzecyrillic": "\u0455", + "dzhecyrillic": "\u045F", + "e": "\u0065", + "eacute": "\u00E9", + "earth": "\u2641", + "ebengali": "\u098F", + "ebopomofo": "\u311C", + "ebreve": "\u0115", + "ecandradeva": "\u090D", + "ecandragujarati": "\u0A8D", + "ecandravowelsigndeva": "\u0945", + "ecandravowelsigngujarati": "\u0AC5", + "ecaron": "\u011B", + "ecedillabreve": "\u1E1D", + "echarmenian": "\u0565", + "echyiwnarmenian": "\u0587", + "ecircle": "\u24D4", + "ecircumflex": "\u00EA", + "ecircumflexacute": "\u1EBF", + "ecircumflexbelow": "\u1E19", + "ecircumflexdotbelow": "\u1EC7", + "ecircumflexgrave": "\u1EC1", + "ecircumflexhookabove": "\u1EC3", + "ecircumflextilde": "\u1EC5", + "ecyrillic": "\u0454", + "edblgrave": "\u0205", + "edeva": "\u090F", + "edieresis": "\u00EB", + "edot": "\u0117", + "edotaccent": "\u0117", + "edotbelow": "\u1EB9", + "eegurmukhi": "\u0A0F", + "eematragurmukhi": "\u0A47", + "efcyrillic": "\u0444", + "egrave": "\u00E8", + "egujarati": "\u0A8F", + "eharmenian": "\u0567", + "ehbopomofo": 
"\u311D", + "ehiragana": "\u3048", + "ehookabove": "\u1EBB", + "eibopomofo": "\u311F", + "eight": "\u0038", + "eightarabic": "\u0668", + "eightbengali": "\u09EE", + "eightcircle": "\u2467", + "eightcircleinversesansserif": "\u2791", + "eightdeva": "\u096E", + "eighteencircle": "\u2471", + "eighteenparen": "\u2485", + "eighteenperiod": "\u2499", + "eightgujarati": "\u0AEE", + "eightgurmukhi": "\u0A6E", + "eighthackarabic": "\u0668", + "eighthangzhou": "\u3028", + "eighthnotebeamed": "\u266B", + "eightideographicparen": "\u3227", + "eightinferior": "\u2088", + "eightmonospace": "\uFF18", + "eightoldstyle": "\uF738", + "eightparen": "\u247B", + "eightperiod": "\u248F", + "eightpersian": "\u06F8", + "eightroman": "\u2177", + "eightsuperior": "\u2078", + "eightthai": "\u0E58", + "einvertedbreve": "\u0207", + "eiotifiedcyrillic": "\u0465", + "ekatakana": "\u30A8", + "ekatakanahalfwidth": "\uFF74", + "ekonkargurmukhi": "\u0A74", + "ekorean": "\u3154", + "elcyrillic": "\u043B", + "element": "\u2208", + "elevencircle": "\u246A", + "elevenparen": "\u247E", + "elevenperiod": "\u2492", + "elevenroman": "\u217A", + "ellipsis": "\u2026", + "ellipsisvertical": "\u22EE", + "emacron": "\u0113", + "emacronacute": "\u1E17", + "emacrongrave": "\u1E15", + "emcyrillic": "\u043C", + "emdash": "\u2014", + "emdashvertical": "\uFE31", + "emonospace": "\uFF45", + "emphasismarkarmenian": "\u055B", + "emptyset": "\u2205", + "enbopomofo": "\u3123", + "encyrillic": "\u043D", + "endash": "\u2013", + "endashvertical": "\uFE32", + "endescendercyrillic": "\u04A3", + "eng": "\u014B", + "engbopomofo": "\u3125", + "enghecyrillic": "\u04A5", + "enhookcyrillic": "\u04C8", + "enspace": "\u2002", + "eogonek": "\u0119", + "eokorean": "\u3153", + "eopen": "\u025B", + "eopenclosed": "\u029A", + "eopenreversed": "\u025C", + "eopenreversedclosed": "\u025E", + "eopenreversedhook": "\u025D", + "eparen": "\u24A0", + "epsilon": "\u03B5", + "epsilontonos": "\u03AD", + "equal": "\u003D", + "equalmonospace": "\uFF1D", + "equalsmall": "\uFE66", + "equalsuperior": "\u207C", + "equivalence": "\u2261", + "erbopomofo": "\u3126", + "ercyrillic": "\u0440", + "ereversed": "\u0258", + "ereversedcyrillic": "\u044D", + "escyrillic": "\u0441", + "esdescendercyrillic": "\u04AB", + "esh": "\u0283", + "eshcurl": "\u0286", + "eshortdeva": "\u090E", + "eshortvowelsigndeva": "\u0946", + "eshreversedloop": "\u01AA", + "eshsquatreversed": "\u0285", + "esmallhiragana": "\u3047", + "esmallkatakana": "\u30A7", + "esmallkatakanahalfwidth": "\uFF6A", + "estimated": "\u212E", + "esuperior": "\uF6EC", + "eta": "\u03B7", + "etarmenian": "\u0568", + "etatonos": "\u03AE", + "eth": "\u00F0", + "etilde": "\u1EBD", + "etildebelow": "\u1E1B", + "etnahtafoukhhebrew": "\u0591", + "etnahtafoukhlefthebrew": "\u0591", + "etnahtahebrew": "\u0591", + "etnahtalefthebrew": "\u0591", + "eturned": "\u01DD", + "eukorean": "\u3161", + "euro": "\u20AC", + "evowelsignbengali": "\u09C7", + "evowelsigndeva": "\u0947", + "evowelsigngujarati": "\u0AC7", + "exclam": "\u0021", + "exclamarmenian": "\u055C", + "exclamdbl": "\u203C", + "exclamdown": "\u00A1", + "exclamdownsmall": "\uF7A1", + "exclammonospace": "\uFF01", + "exclamsmall": "\uF721", + "existential": "\u2203", + "ezh": "\u0292", + "ezhcaron": "\u01EF", + "ezhcurl": "\u0293", + "ezhreversed": "\u01B9", + "ezhtail": "\u01BA", + "f": "\u0066", + "fadeva": "\u095E", + "fagurmukhi": "\u0A5E", + "fahrenheit": "\u2109", + "fathaarabic": "\u064E", + "fathalowarabic": "\u064E", + "fathatanarabic": "\u064B", + "fbopomofo": "\u3108", + "fcircle": 
"\u24D5", + "fdotaccent": "\u1E1F", + "feharabic": "\u0641", + "feharmenian": "\u0586", + "fehfinalarabic": "\uFED2", + "fehinitialarabic": "\uFED3", + "fehmedialarabic": "\uFED4", + "feicoptic": "\u03E5", + "female": "\u2640", + "ff": "\uFB00", + "ffi": "\uFB03", + "ffl": "\uFB04", + "fi": "\uFB01", + "fifteencircle": "\u246E", + "fifteenparen": "\u2482", + "fifteenperiod": "\u2496", + "figuredash": "\u2012", + "filledbox": "\u25A0", + "filledrect": "\u25AC", + "finalkaf": "\u05DA", + "finalkafdagesh": "\uFB3A", + "finalkafdageshhebrew": "\uFB3A", + "finalkafhebrew": "\u05DA", + "finalkafqamats": "\u05DA\u05B8", + "finalkafqamatshebrew": "\u05DA\u05B8", + "finalkafsheva": "\u05DA\u05B0", + "finalkafshevahebrew": "\u05DA\u05B0", + "finalmem": "\u05DD", + "finalmemhebrew": "\u05DD", + "finalnun": "\u05DF", + "finalnunhebrew": "\u05DF", + "finalpe": "\u05E3", + "finalpehebrew": "\u05E3", + "finaltsadi": "\u05E5", + "finaltsadihebrew": "\u05E5", + "firsttonechinese": "\u02C9", + "fisheye": "\u25C9", + "fitacyrillic": "\u0473", + "five": "\u0035", + "fivearabic": "\u0665", + "fivebengali": "\u09EB", + "fivecircle": "\u2464", + "fivecircleinversesansserif": "\u278E", + "fivedeva": "\u096B", + "fiveeighths": "\u215D", + "fivegujarati": "\u0AEB", + "fivegurmukhi": "\u0A6B", + "fivehackarabic": "\u0665", + "fivehangzhou": "\u3025", + "fiveideographicparen": "\u3224", + "fiveinferior": "\u2085", + "fivemonospace": "\uFF15", + "fiveoldstyle": "\uF735", + "fiveparen": "\u2478", + "fiveperiod": "\u248C", + "fivepersian": "\u06F5", + "fiveroman": "\u2174", + "fivesuperior": "\u2075", + "fivethai": "\u0E55", + "fl": "\uFB02", + "florin": "\u0192", + "fmonospace": "\uFF46", + "fmsquare": "\u3399", + "fofanthai": "\u0E1F", + "fofathai": "\u0E1D", + "fongmanthai": "\u0E4F", + "forall": "\u2200", + "four": "\u0034", + "fourarabic": "\u0664", + "fourbengali": "\u09EA", + "fourcircle": "\u2463", + "fourcircleinversesansserif": "\u278D", + "fourdeva": "\u096A", + "fourgujarati": "\u0AEA", + "fourgurmukhi": "\u0A6A", + "fourhackarabic": "\u0664", + "fourhangzhou": "\u3024", + "fourideographicparen": "\u3223", + "fourinferior": "\u2084", + "fourmonospace": "\uFF14", + "fournumeratorbengali": "\u09F7", + "fouroldstyle": "\uF734", + "fourparen": "\u2477", + "fourperiod": "\u248B", + "fourpersian": "\u06F4", + "fourroman": "\u2173", + "foursuperior": "\u2074", + "fourteencircle": "\u246D", + "fourteenparen": "\u2481", + "fourteenperiod": "\u2495", + "fourthai": "\u0E54", + "fourthtonechinese": "\u02CB", + "fparen": "\u24A1", + "fraction": "\u2044", + "franc": "\u20A3", + "g": "\u0067", + "gabengali": "\u0997", + "gacute": "\u01F5", + "gadeva": "\u0917", + "gafarabic": "\u06AF", + "gaffinalarabic": "\uFB93", + "gafinitialarabic": "\uFB94", + "gafmedialarabic": "\uFB95", + "gagujarati": "\u0A97", + "gagurmukhi": "\u0A17", + "gahiragana": "\u304C", + "gakatakana": "\u30AC", + "gamma": "\u03B3", + "gammalatinsmall": "\u0263", + "gammasuperior": "\u02E0", + "gangiacoptic": "\u03EB", + "gbopomofo": "\u310D", + "gbreve": "\u011F", + "gcaron": "\u01E7", + "gcedilla": "\u0123", + "gcircle": "\u24D6", + "gcircumflex": "\u011D", + "gcommaaccent": "\u0123", + "gdot": "\u0121", + "gdotaccent": "\u0121", + "gecyrillic": "\u0433", + "gehiragana": "\u3052", + "gekatakana": "\u30B2", + "geometricallyequal": "\u2251", + "gereshaccenthebrew": "\u059C", + "gereshhebrew": "\u05F3", + "gereshmuqdamhebrew": "\u059D", + "germandbls": "\u00DF", + "gershayimaccenthebrew": "\u059E", + "gershayimhebrew": "\u05F4", + "getamark": "\u3013", + 
"ghabengali": "\u0998", + "ghadarmenian": "\u0572", + "ghadeva": "\u0918", + "ghagujarati": "\u0A98", + "ghagurmukhi": "\u0A18", + "ghainarabic": "\u063A", + "ghainfinalarabic": "\uFECE", + "ghaininitialarabic": "\uFECF", + "ghainmedialarabic": "\uFED0", + "ghemiddlehookcyrillic": "\u0495", + "ghestrokecyrillic": "\u0493", + "gheupturncyrillic": "\u0491", + "ghhadeva": "\u095A", + "ghhagurmukhi": "\u0A5A", + "ghook": "\u0260", + "ghzsquare": "\u3393", + "gihiragana": "\u304E", + "gikatakana": "\u30AE", + "gimarmenian": "\u0563", + "gimel": "\u05D2", + "gimeldagesh": "\uFB32", + "gimeldageshhebrew": "\uFB32", + "gimelhebrew": "\u05D2", + "gjecyrillic": "\u0453", + "glottalinvertedstroke": "\u01BE", + "glottalstop": "\u0294", + "glottalstopinverted": "\u0296", + "glottalstopmod": "\u02C0", + "glottalstopreversed": "\u0295", + "glottalstopreversedmod": "\u02C1", + "glottalstopreversedsuperior": "\u02E4", + "glottalstopstroke": "\u02A1", + "glottalstopstrokereversed": "\u02A2", + "gmacron": "\u1E21", + "gmonospace": "\uFF47", + "gohiragana": "\u3054", + "gokatakana": "\u30B4", + "gparen": "\u24A2", + "gpasquare": "\u33AC", + "gradient": "\u2207", + "grave": "\u0060", + "gravebelowcmb": "\u0316", + "gravecmb": "\u0300", + "gravecomb": "\u0300", + "gravedeva": "\u0953", + "gravelowmod": "\u02CE", + "gravemonospace": "\uFF40", + "gravetonecmb": "\u0340", + "greater": "\u003E", + "greaterequal": "\u2265", + "greaterequalorless": "\u22DB", + "greatermonospace": "\uFF1E", + "greaterorequivalent": "\u2273", + "greaterorless": "\u2277", + "greateroverequal": "\u2267", + "greatersmall": "\uFE65", + "gscript": "\u0261", + "gstroke": "\u01E5", + "guhiragana": "\u3050", + "guillemotleft": "\u00AB", + "guillemotright": "\u00BB", + "guilsinglleft": "\u2039", + "guilsinglright": "\u203A", + "gukatakana": "\u30B0", + "guramusquare": "\u3318", + "gysquare": "\u33C9", + "h": "\u0068", + "haabkhasiancyrillic": "\u04A9", + "haaltonearabic": "\u06C1", + "habengali": "\u09B9", + "hadescendercyrillic": "\u04B3", + "hadeva": "\u0939", + "hagujarati": "\u0AB9", + "hagurmukhi": "\u0A39", + "haharabic": "\u062D", + "hahfinalarabic": "\uFEA2", + "hahinitialarabic": "\uFEA3", + "hahiragana": "\u306F", + "hahmedialarabic": "\uFEA4", + "haitusquare": "\u332A", + "hakatakana": "\u30CF", + "hakatakanahalfwidth": "\uFF8A", + "halantgurmukhi": "\u0A4D", + "hamzaarabic": "\u0621", + "hamzadammaarabic": "\u0621\u064F", + "hamzadammatanarabic": "\u0621\u064C", + "hamzafathaarabic": "\u0621\u064E", + "hamzafathatanarabic": "\u0621\u064B", + "hamzalowarabic": "\u0621", + "hamzalowkasraarabic": "\u0621\u0650", + "hamzalowkasratanarabic": "\u0621\u064D", + "hamzasukunarabic": "\u0621\u0652", + "hangulfiller": "\u3164", + "hardsigncyrillic": "\u044A", + "harpoonleftbarbup": "\u21BC", + "harpoonrightbarbup": "\u21C0", + "hasquare": "\u33CA", + "hatafpatah": "\u05B2", + "hatafpatah16": "\u05B2", + "hatafpatah23": "\u05B2", + "hatafpatah2f": "\u05B2", + "hatafpatahhebrew": "\u05B2", + "hatafpatahnarrowhebrew": "\u05B2", + "hatafpatahquarterhebrew": "\u05B2", + "hatafpatahwidehebrew": "\u05B2", + "hatafqamats": "\u05B3", + "hatafqamats1b": "\u05B3", + "hatafqamats28": "\u05B3", + "hatafqamats34": "\u05B3", + "hatafqamatshebrew": "\u05B3", + "hatafqamatsnarrowhebrew": "\u05B3", + "hatafqamatsquarterhebrew": "\u05B3", + "hatafqamatswidehebrew": "\u05B3", + "hatafsegol": "\u05B1", + "hatafsegol17": "\u05B1", + "hatafsegol24": "\u05B1", + "hatafsegol30": "\u05B1", + "hatafsegolhebrew": "\u05B1", + "hatafsegolnarrowhebrew": "\u05B1", + 
"hatafsegolquarterhebrew": "\u05B1", + "hatafsegolwidehebrew": "\u05B1", + "hbar": "\u0127", + "hbopomofo": "\u310F", + "hbrevebelow": "\u1E2B", + "hcedilla": "\u1E29", + "hcircle": "\u24D7", + "hcircumflex": "\u0125", + "hdieresis": "\u1E27", + "hdotaccent": "\u1E23", + "hdotbelow": "\u1E25", + "he": "\u05D4", + "heart": "\u2665", + "heartsuitblack": "\u2665", + "heartsuitwhite": "\u2661", + "hedagesh": "\uFB34", + "hedageshhebrew": "\uFB34", + "hehaltonearabic": "\u06C1", + "heharabic": "\u0647", + "hehebrew": "\u05D4", + "hehfinalaltonearabic": "\uFBA7", + "hehfinalalttwoarabic": "\uFEEA", + "hehfinalarabic": "\uFEEA", + "hehhamzaabovefinalarabic": "\uFBA5", + "hehhamzaaboveisolatedarabic": "\uFBA4", + "hehinitialaltonearabic": "\uFBA8", + "hehinitialarabic": "\uFEEB", + "hehiragana": "\u3078", + "hehmedialaltonearabic": "\uFBA9", + "hehmedialarabic": "\uFEEC", + "heiseierasquare": "\u337B", + "hekatakana": "\u30D8", + "hekatakanahalfwidth": "\uFF8D", + "hekutaarusquare": "\u3336", + "henghook": "\u0267", + "herutusquare": "\u3339", + "het": "\u05D7", + "hethebrew": "\u05D7", + "hhook": "\u0266", + "hhooksuperior": "\u02B1", + "hieuhacirclekorean": "\u327B", + "hieuhaparenkorean": "\u321B", + "hieuhcirclekorean": "\u326D", + "hieuhkorean": "\u314E", + "hieuhparenkorean": "\u320D", + "hihiragana": "\u3072", + "hikatakana": "\u30D2", + "hikatakanahalfwidth": "\uFF8B", + "hiriq": "\u05B4", + "hiriq14": "\u05B4", + "hiriq21": "\u05B4", + "hiriq2d": "\u05B4", + "hiriqhebrew": "\u05B4", + "hiriqnarrowhebrew": "\u05B4", + "hiriqquarterhebrew": "\u05B4", + "hiriqwidehebrew": "\u05B4", + "hlinebelow": "\u1E96", + "hmonospace": "\uFF48", + "hoarmenian": "\u0570", + "hohipthai": "\u0E2B", + "hohiragana": "\u307B", + "hokatakana": "\u30DB", + "hokatakanahalfwidth": "\uFF8E", + "holam": "\u05B9", + "holam19": "\u05B9", + "holam26": "\u05B9", + "holam32": "\u05B9", + "holamhebrew": "\u05B9", + "holamnarrowhebrew": "\u05B9", + "holamquarterhebrew": "\u05B9", + "holamwidehebrew": "\u05B9", + "honokhukthai": "\u0E2E", + "hookabovecomb": "\u0309", + "hookcmb": "\u0309", + "hookpalatalizedbelowcmb": "\u0321", + "hookretroflexbelowcmb": "\u0322", + "hoonsquare": "\u3342", + "horicoptic": "\u03E9", + "horizontalbar": "\u2015", + "horncmb": "\u031B", + "hotsprings": "\u2668", + "house": "\u2302", + "hparen": "\u24A3", + "hsuperior": "\u02B0", + "hturned": "\u0265", + "huhiragana": "\u3075", + "huiitosquare": "\u3333", + "hukatakana": "\u30D5", + "hukatakanahalfwidth": "\uFF8C", + "hungarumlaut": "\u02DD", + "hungarumlautcmb": "\u030B", + "hv": "\u0195", + "hyphen": "\u002D", + "hypheninferior": "\uF6E5", + "hyphenmonospace": "\uFF0D", + "hyphensmall": "\uFE63", + "hyphensuperior": "\uF6E6", + "hyphentwo": "\u2010", + "i": "\u0069", + "iacute": "\u00ED", + "iacyrillic": "\u044F", + "ibengali": "\u0987", + "ibopomofo": "\u3127", + "ibreve": "\u012D", + "icaron": "\u01D0", + "icircle": "\u24D8", + "icircumflex": "\u00EE", + "icyrillic": "\u0456", + "idblgrave": "\u0209", + "ideographearthcircle": "\u328F", + "ideographfirecircle": "\u328B", + "ideographicallianceparen": "\u323F", + "ideographiccallparen": "\u323A", + "ideographiccentrecircle": "\u32A5", + "ideographicclose": "\u3006", + "ideographiccomma": "\u3001", + "ideographiccommaleft": "\uFF64", + "ideographiccongratulationparen": "\u3237", + "ideographiccorrectcircle": "\u32A3", + "ideographicearthparen": "\u322F", + "ideographicenterpriseparen": "\u323D", + "ideographicexcellentcircle": "\u329D", + "ideographicfestivalparen": "\u3240", + 
"ideographicfinancialcircle": "\u3296", + "ideographicfinancialparen": "\u3236", + "ideographicfireparen": "\u322B", + "ideographichaveparen": "\u3232", + "ideographichighcircle": "\u32A4", + "ideographiciterationmark": "\u3005", + "ideographiclaborcircle": "\u3298", + "ideographiclaborparen": "\u3238", + "ideographicleftcircle": "\u32A7", + "ideographiclowcircle": "\u32A6", + "ideographicmedicinecircle": "\u32A9", + "ideographicmetalparen": "\u322E", + "ideographicmoonparen": "\u322A", + "ideographicnameparen": "\u3234", + "ideographicperiod": "\u3002", + "ideographicprintcircle": "\u329E", + "ideographicreachparen": "\u3243", + "ideographicrepresentparen": "\u3239", + "ideographicresourceparen": "\u323E", + "ideographicrightcircle": "\u32A8", + "ideographicsecretcircle": "\u3299", + "ideographicselfparen": "\u3242", + "ideographicsocietyparen": "\u3233", + "ideographicspace": "\u3000", + "ideographicspecialparen": "\u3235", + "ideographicstockparen": "\u3231", + "ideographicstudyparen": "\u323B", + "ideographicsunparen": "\u3230", + "ideographicsuperviseparen": "\u323C", + "ideographicwaterparen": "\u322C", + "ideographicwoodparen": "\u322D", + "ideographiczero": "\u3007", + "ideographmetalcircle": "\u328E", + "ideographmooncircle": "\u328A", + "ideographnamecircle": "\u3294", + "ideographsuncircle": "\u3290", + "ideographwatercircle": "\u328C", + "ideographwoodcircle": "\u328D", + "ideva": "\u0907", + "idieresis": "\u00EF", + "idieresisacute": "\u1E2F", + "idieresiscyrillic": "\u04E5", + "idotbelow": "\u1ECB", + "iebrevecyrillic": "\u04D7", + "iecyrillic": "\u0435", + "ieungacirclekorean": "\u3275", + "ieungaparenkorean": "\u3215", + "ieungcirclekorean": "\u3267", + "ieungkorean": "\u3147", + "ieungparenkorean": "\u3207", + "igrave": "\u00EC", + "igujarati": "\u0A87", + "igurmukhi": "\u0A07", + "ihiragana": "\u3044", + "ihookabove": "\u1EC9", + "iibengali": "\u0988", + "iicyrillic": "\u0438", + "iideva": "\u0908", + "iigujarati": "\u0A88", + "iigurmukhi": "\u0A08", + "iimatragurmukhi": "\u0A40", + "iinvertedbreve": "\u020B", + "iishortcyrillic": "\u0439", + "iivowelsignbengali": "\u09C0", + "iivowelsigndeva": "\u0940", + "iivowelsigngujarati": "\u0AC0", + "ij": "\u0133", + "ikatakana": "\u30A4", + "ikatakanahalfwidth": "\uFF72", + "ikorean": "\u3163", + "ilde": "\u02DC", + "iluyhebrew": "\u05AC", + "imacron": "\u012B", + "imacroncyrillic": "\u04E3", + "imageorapproximatelyequal": "\u2253", + "imatragurmukhi": "\u0A3F", + "imonospace": "\uFF49", + "increment": "\u2206", + "infinity": "\u221E", + "iniarmenian": "\u056B", + "integral": "\u222B", + "integralbottom": "\u2321", + "integralbt": "\u2321", + "integralex": "\uF8F5", + "integraltop": "\u2320", + "integraltp": "\u2320", + "intersection": "\u2229", + "intisquare": "\u3305", + "invbullet": "\u25D8", + "invcircle": "\u25D9", + "invsmileface": "\u263B", + "iocyrillic": "\u0451", + "iogonek": "\u012F", + "iota": "\u03B9", + "iotadieresis": "\u03CA", + "iotadieresistonos": "\u0390", + "iotalatin": "\u0269", + "iotatonos": "\u03AF", + "iparen": "\u24A4", + "irigurmukhi": "\u0A72", + "ismallhiragana": "\u3043", + "ismallkatakana": "\u30A3", + "ismallkatakanahalfwidth": "\uFF68", + "issharbengali": "\u09FA", + "istroke": "\u0268", + "isuperior": "\uF6ED", + "iterationhiragana": "\u309D", + "iterationkatakana": "\u30FD", + "itilde": "\u0129", + "itildebelow": "\u1E2D", + "iubopomofo": "\u3129", + "iucyrillic": "\u044E", + "ivowelsignbengali": "\u09BF", + "ivowelsigndeva": "\u093F", + "ivowelsigngujarati": "\u0ABF", + "izhitsacyrillic": 
"\u0475", + "izhitsadblgravecyrillic": "\u0477", + "j": "\u006A", + "jaarmenian": "\u0571", + "jabengali": "\u099C", + "jadeva": "\u091C", + "jagujarati": "\u0A9C", + "jagurmukhi": "\u0A1C", + "jbopomofo": "\u3110", + "jcaron": "\u01F0", + "jcircle": "\u24D9", + "jcircumflex": "\u0135", + "jcrossedtail": "\u029D", + "jdotlessstroke": "\u025F", + "jecyrillic": "\u0458", + "jeemarabic": "\u062C", + "jeemfinalarabic": "\uFE9E", + "jeeminitialarabic": "\uFE9F", + "jeemmedialarabic": "\uFEA0", + "jeharabic": "\u0698", + "jehfinalarabic": "\uFB8B", + "jhabengali": "\u099D", + "jhadeva": "\u091D", + "jhagujarati": "\u0A9D", + "jhagurmukhi": "\u0A1D", + "jheharmenian": "\u057B", + "jis": "\u3004", + "jmonospace": "\uFF4A", + "jparen": "\u24A5", + "jsuperior": "\u02B2", + "k": "\u006B", + "kabashkircyrillic": "\u04A1", + "kabengali": "\u0995", + "kacute": "\u1E31", + "kacyrillic": "\u043A", + "kadescendercyrillic": "\u049B", + "kadeva": "\u0915", + "kaf": "\u05DB", + "kafarabic": "\u0643", + "kafdagesh": "\uFB3B", + "kafdageshhebrew": "\uFB3B", + "kaffinalarabic": "\uFEDA", + "kafhebrew": "\u05DB", + "kafinitialarabic": "\uFEDB", + "kafmedialarabic": "\uFEDC", + "kafrafehebrew": "\uFB4D", + "kagujarati": "\u0A95", + "kagurmukhi": "\u0A15", + "kahiragana": "\u304B", + "kahookcyrillic": "\u04C4", + "kakatakana": "\u30AB", + "kakatakanahalfwidth": "\uFF76", + "kappa": "\u03BA", + "kappasymbolgreek": "\u03F0", + "kapyeounmieumkorean": "\u3171", + "kapyeounphieuphkorean": "\u3184", + "kapyeounpieupkorean": "\u3178", + "kapyeounssangpieupkorean": "\u3179", + "karoriisquare": "\u330D", + "kashidaautoarabic": "\u0640", + "kashidaautonosidebearingarabic": "\u0640", + "kasmallkatakana": "\u30F5", + "kasquare": "\u3384", + "kasraarabic": "\u0650", + "kasratanarabic": "\u064D", + "kastrokecyrillic": "\u049F", + "katahiraprolongmarkhalfwidth": "\uFF70", + "kaverticalstrokecyrillic": "\u049D", + "kbopomofo": "\u310E", + "kcalsquare": "\u3389", + "kcaron": "\u01E9", + "kcedilla": "\u0137", + "kcircle": "\u24DA", + "kcommaaccent": "\u0137", + "kdotbelow": "\u1E33", + "keharmenian": "\u0584", + "kehiragana": "\u3051", + "kekatakana": "\u30B1", + "kekatakanahalfwidth": "\uFF79", + "kenarmenian": "\u056F", + "kesmallkatakana": "\u30F6", + "kgreenlandic": "\u0138", + "khabengali": "\u0996", + "khacyrillic": "\u0445", + "khadeva": "\u0916", + "khagujarati": "\u0A96", + "khagurmukhi": "\u0A16", + "khaharabic": "\u062E", + "khahfinalarabic": "\uFEA6", + "khahinitialarabic": "\uFEA7", + "khahmedialarabic": "\uFEA8", + "kheicoptic": "\u03E7", + "khhadeva": "\u0959", + "khhagurmukhi": "\u0A59", + "khieukhacirclekorean": "\u3278", + "khieukhaparenkorean": "\u3218", + "khieukhcirclekorean": "\u326A", + "khieukhkorean": "\u314B", + "khieukhparenkorean": "\u320A", + "khokhaithai": "\u0E02", + "khokhonthai": "\u0E05", + "khokhuatthai": "\u0E03", + "khokhwaithai": "\u0E04", + "khomutthai": "\u0E5B", + "khook": "\u0199", + "khorakhangthai": "\u0E06", + "khzsquare": "\u3391", + "kihiragana": "\u304D", + "kikatakana": "\u30AD", + "kikatakanahalfwidth": "\uFF77", + "kiroguramusquare": "\u3315", + "kiromeetorusquare": "\u3316", + "kirosquare": "\u3314", + "kiyeokacirclekorean": "\u326E", + "kiyeokaparenkorean": "\u320E", + "kiyeokcirclekorean": "\u3260", + "kiyeokkorean": "\u3131", + "kiyeokparenkorean": "\u3200", + "kiyeoksioskorean": "\u3133", + "kjecyrillic": "\u045C", + "klinebelow": "\u1E35", + "klsquare": "\u3398", + "kmcubedsquare": "\u33A6", + "kmonospace": "\uFF4B", + "kmsquaredsquare": "\u33A2", + "kohiragana": "\u3053", + 
"kohmsquare": "\u33C0", + "kokaithai": "\u0E01", + "kokatakana": "\u30B3", + "kokatakanahalfwidth": "\uFF7A", + "kooposquare": "\u331E", + "koppacyrillic": "\u0481", + "koreanstandardsymbol": "\u327F", + "koroniscmb": "\u0343", + "kparen": "\u24A6", + "kpasquare": "\u33AA", + "ksicyrillic": "\u046F", + "ktsquare": "\u33CF", + "kturned": "\u029E", + "kuhiragana": "\u304F", + "kukatakana": "\u30AF", + "kukatakanahalfwidth": "\uFF78", + "kvsquare": "\u33B8", + "kwsquare": "\u33BE", + "l": "\u006C", + "labengali": "\u09B2", + "lacute": "\u013A", + "ladeva": "\u0932", + "lagujarati": "\u0AB2", + "lagurmukhi": "\u0A32", + "lakkhangyaothai": "\u0E45", + "lamaleffinalarabic": "\uFEFC", + "lamalefhamzaabovefinalarabic": "\uFEF8", + "lamalefhamzaaboveisolatedarabic": "\uFEF7", + "lamalefhamzabelowfinalarabic": "\uFEFA", + "lamalefhamzabelowisolatedarabic": "\uFEF9", + "lamalefisolatedarabic": "\uFEFB", + "lamalefmaddaabovefinalarabic": "\uFEF6", + "lamalefmaddaaboveisolatedarabic": "\uFEF5", + "lamarabic": "\u0644", + "lambda": "\u03BB", + "lambdastroke": "\u019B", + "lamed": "\u05DC", + "lameddagesh": "\uFB3C", + "lameddageshhebrew": "\uFB3C", + "lamedhebrew": "\u05DC", + "lamedholam": "\u05DC\u05B9", + "lamedholamdagesh": "\u05DC\u05B9\u05BC", + "lamedholamdageshhebrew": "\u05DC\u05B9\u05BC", + "lamedholamhebrew": "\u05DC\u05B9", + "lamfinalarabic": "\uFEDE", + "lamhahinitialarabic": "\uFCCA", + "laminitialarabic": "\uFEDF", + "lamjeeminitialarabic": "\uFCC9", + "lamkhahinitialarabic": "\uFCCB", + "lamlamhehisolatedarabic": "\uFDF2", + "lammedialarabic": "\uFEE0", + "lammeemhahinitialarabic": "\uFD88", + "lammeeminitialarabic": "\uFCCC", + "lammeemjeeminitialarabic": "\uFEDF\uFEE4\uFEA0", + "lammeemkhahinitialarabic": "\uFEDF\uFEE4\uFEA8", + "largecircle": "\u25EF", + "lbar": "\u019A", + "lbelt": "\u026C", + "lbopomofo": "\u310C", + "lcaron": "\u013E", + "lcedilla": "\u013C", + "lcircle": "\u24DB", + "lcircumflexbelow": "\u1E3D", + "lcommaaccent": "\u013C", + "ldot": "\u0140", + "ldotaccent": "\u0140", + "ldotbelow": "\u1E37", + "ldotbelowmacron": "\u1E39", + "leftangleabovecmb": "\u031A", + "lefttackbelowcmb": "\u0318", + "less": "\u003C", + "lessequal": "\u2264", + "lessequalorgreater": "\u22DA", + "lessmonospace": "\uFF1C", + "lessorequivalent": "\u2272", + "lessorgreater": "\u2276", + "lessoverequal": "\u2266", + "lesssmall": "\uFE64", + "lezh": "\u026E", + "lfblock": "\u258C", + "lhookretroflex": "\u026D", + "lira": "\u20A4", + "liwnarmenian": "\u056C", + "lj": "\u01C9", + "ljecyrillic": "\u0459", + "ll": "\uF6C0", + "lladeva": "\u0933", + "llagujarati": "\u0AB3", + "llinebelow": "\u1E3B", + "llladeva": "\u0934", + "llvocalicbengali": "\u09E1", + "llvocalicdeva": "\u0961", + "llvocalicvowelsignbengali": "\u09E3", + "llvocalicvowelsigndeva": "\u0963", + "lmiddletilde": "\u026B", + "lmonospace": "\uFF4C", + "lmsquare": "\u33D0", + "lochulathai": "\u0E2C", + "logicaland": "\u2227", + "logicalnot": "\u00AC", + "logicalnotreversed": "\u2310", + "logicalor": "\u2228", + "lolingthai": "\u0E25", + "longs": "\u017F", + "lowlinecenterline": "\uFE4E", + "lowlinecmb": "\u0332", + "lowlinedashed": "\uFE4D", + "lozenge": "\u25CA", + "lparen": "\u24A7", + "lslash": "\u0142", + "lsquare": "\u2113", + "lsuperior": "\uF6EE", + "ltshade": "\u2591", + "luthai": "\u0E26", + "lvocalicbengali": "\u098C", + "lvocalicdeva": "\u090C", + "lvocalicvowelsignbengali": "\u09E2", + "lvocalicvowelsigndeva": "\u0962", + "lxsquare": "\u33D3", + "m": "\u006D", + "mabengali": "\u09AE", + "macron": "\u00AF", + "macronbelowcmb": 
"\u0331", + "macroncmb": "\u0304", + "macronlowmod": "\u02CD", + "macronmonospace": "\uFFE3", + "macute": "\u1E3F", + "madeva": "\u092E", + "magujarati": "\u0AAE", + "magurmukhi": "\u0A2E", + "mahapakhhebrew": "\u05A4", + "mahapakhlefthebrew": "\u05A4", + "mahiragana": "\u307E", + "maichattawalowleftthai": "\uF895", + "maichattawalowrightthai": "\uF894", + "maichattawathai": "\u0E4B", + "maichattawaupperleftthai": "\uF893", + "maieklowleftthai": "\uF88C", + "maieklowrightthai": "\uF88B", + "maiekthai": "\u0E48", + "maiekupperleftthai": "\uF88A", + "maihanakatleftthai": "\uF884", + "maihanakatthai": "\u0E31", + "maitaikhuleftthai": "\uF889", + "maitaikhuthai": "\u0E47", + "maitholowleftthai": "\uF88F", + "maitholowrightthai": "\uF88E", + "maithothai": "\u0E49", + "maithoupperleftthai": "\uF88D", + "maitrilowleftthai": "\uF892", + "maitrilowrightthai": "\uF891", + "maitrithai": "\u0E4A", + "maitriupperleftthai": "\uF890", + "maiyamokthai": "\u0E46", + "makatakana": "\u30DE", + "makatakanahalfwidth": "\uFF8F", + "male": "\u2642", + "mansyonsquare": "\u3347", + "maqafhebrew": "\u05BE", + "mars": "\u2642", + "masoracirclehebrew": "\u05AF", + "masquare": "\u3383", + "mbopomofo": "\u3107", + "mbsquare": "\u33D4", + "mcircle": "\u24DC", + "mcubedsquare": "\u33A5", + "mdotaccent": "\u1E41", + "mdotbelow": "\u1E43", + "meemarabic": "\u0645", + "meemfinalarabic": "\uFEE2", + "meeminitialarabic": "\uFEE3", + "meemmedialarabic": "\uFEE4", + "meemmeeminitialarabic": "\uFCD1", + "meemmeemisolatedarabic": "\uFC48", + "meetorusquare": "\u334D", + "mehiragana": "\u3081", + "meizierasquare": "\u337E", + "mekatakana": "\u30E1", + "mekatakanahalfwidth": "\uFF92", + "mem": "\u05DE", + "memdagesh": "\uFB3E", + "memdageshhebrew": "\uFB3E", + "memhebrew": "\u05DE", + "menarmenian": "\u0574", + "merkhahebrew": "\u05A5", + "merkhakefulahebrew": "\u05A6", + "merkhakefulalefthebrew": "\u05A6", + "merkhalefthebrew": "\u05A5", + "mhook": "\u0271", + "mhzsquare": "\u3392", + "middledotkatakanahalfwidth": "\uFF65", + "middot": "\u00B7", + "mieumacirclekorean": "\u3272", + "mieumaparenkorean": "\u3212", + "mieumcirclekorean": "\u3264", + "mieumkorean": "\u3141", + "mieumpansioskorean": "\u3170", + "mieumparenkorean": "\u3204", + "mieumpieupkorean": "\u316E", + "mieumsioskorean": "\u316F", + "mihiragana": "\u307F", + "mikatakana": "\u30DF", + "mikatakanahalfwidth": "\uFF90", + "minus": "\u2212", + "minusbelowcmb": "\u0320", + "minuscircle": "\u2296", + "minusmod": "\u02D7", + "minusplus": "\u2213", + "minute": "\u2032", + "miribaarusquare": "\u334A", + "mirisquare": "\u3349", + "mlonglegturned": "\u0270", + "mlsquare": "\u3396", + "mmcubedsquare": "\u33A3", + "mmonospace": "\uFF4D", + "mmsquaredsquare": "\u339F", + "mohiragana": "\u3082", + "mohmsquare": "\u33C1", + "mokatakana": "\u30E2", + "mokatakanahalfwidth": "\uFF93", + "molsquare": "\u33D6", + "momathai": "\u0E21", + "moverssquare": "\u33A7", + "moverssquaredsquare": "\u33A8", + "mparen": "\u24A8", + "mpasquare": "\u33AB", + "mssquare": "\u33B3", + "msuperior": "\uF6EF", + "mturned": "\u026F", + "mu": "\u00B5", + "mu1": "\u00B5", + "muasquare": "\u3382", + "muchgreater": "\u226B", + "muchless": "\u226A", + "mufsquare": "\u338C", + "mugreek": "\u03BC", + "mugsquare": "\u338D", + "muhiragana": "\u3080", + "mukatakana": "\u30E0", + "mukatakanahalfwidth": "\uFF91", + "mulsquare": "\u3395", + "multiply": "\u00D7", + "mumsquare": "\u339B", + "munahhebrew": "\u05A3", + "munahlefthebrew": "\u05A3", + "musicalnote": "\u266A", + "musicalnotedbl": "\u266B", + "musicflatsign": 
"\u266D", + "musicsharpsign": "\u266F", + "mussquare": "\u33B2", + "muvsquare": "\u33B6", + "muwsquare": "\u33BC", + "mvmegasquare": "\u33B9", + "mvsquare": "\u33B7", + "mwmegasquare": "\u33BF", + "mwsquare": "\u33BD", + "n": "\u006E", + "nabengali": "\u09A8", + "nabla": "\u2207", + "nacute": "\u0144", + "nadeva": "\u0928", + "nagujarati": "\u0AA8", + "nagurmukhi": "\u0A28", + "nahiragana": "\u306A", + "nakatakana": "\u30CA", + "nakatakanahalfwidth": "\uFF85", + "napostrophe": "\u0149", + "nasquare": "\u3381", + "nbopomofo": "\u310B", + "nbspace": "\u00A0", + "ncaron": "\u0148", + "ncedilla": "\u0146", + "ncircle": "\u24DD", + "ncircumflexbelow": "\u1E4B", + "ncommaaccent": "\u0146", + "ndotaccent": "\u1E45", + "ndotbelow": "\u1E47", + "nehiragana": "\u306D", + "nekatakana": "\u30CD", + "nekatakanahalfwidth": "\uFF88", + "newsheqelsign": "\u20AA", + "nfsquare": "\u338B", + "ngabengali": "\u0999", + "ngadeva": "\u0919", + "ngagujarati": "\u0A99", + "ngagurmukhi": "\u0A19", + "ngonguthai": "\u0E07", + "nhiragana": "\u3093", + "nhookleft": "\u0272", + "nhookretroflex": "\u0273", + "nieunacirclekorean": "\u326F", + "nieunaparenkorean": "\u320F", + "nieuncieuckorean": "\u3135", + "nieuncirclekorean": "\u3261", + "nieunhieuhkorean": "\u3136", + "nieunkorean": "\u3134", + "nieunpansioskorean": "\u3168", + "nieunparenkorean": "\u3201", + "nieunsioskorean": "\u3167", + "nieuntikeutkorean": "\u3166", + "nihiragana": "\u306B", + "nikatakana": "\u30CB", + "nikatakanahalfwidth": "\uFF86", + "nikhahitleftthai": "\uF899", + "nikhahitthai": "\u0E4D", + "nine": "\u0039", + "ninearabic": "\u0669", + "ninebengali": "\u09EF", + "ninecircle": "\u2468", + "ninecircleinversesansserif": "\u2792", + "ninedeva": "\u096F", + "ninegujarati": "\u0AEF", + "ninegurmukhi": "\u0A6F", + "ninehackarabic": "\u0669", + "ninehangzhou": "\u3029", + "nineideographicparen": "\u3228", + "nineinferior": "\u2089", + "ninemonospace": "\uFF19", + "nineoldstyle": "\uF739", + "nineparen": "\u247C", + "nineperiod": "\u2490", + "ninepersian": "\u06F9", + "nineroman": "\u2178", + "ninesuperior": "\u2079", + "nineteencircle": "\u2472", + "nineteenparen": "\u2486", + "nineteenperiod": "\u249A", + "ninethai": "\u0E59", + "nj": "\u01CC", + "njecyrillic": "\u045A", + "nkatakana": "\u30F3", + "nkatakanahalfwidth": "\uFF9D", + "nlegrightlong": "\u019E", + "nlinebelow": "\u1E49", + "nmonospace": "\uFF4E", + "nmsquare": "\u339A", + "nnabengali": "\u09A3", + "nnadeva": "\u0923", + "nnagujarati": "\u0AA3", + "nnagurmukhi": "\u0A23", + "nnnadeva": "\u0929", + "nohiragana": "\u306E", + "nokatakana": "\u30CE", + "nokatakanahalfwidth": "\uFF89", + "nonbreakingspace": "\u00A0", + "nonenthai": "\u0E13", + "nonuthai": "\u0E19", + "noonarabic": "\u0646", + "noonfinalarabic": "\uFEE6", + "noonghunnaarabic": "\u06BA", + "noonghunnafinalarabic": "\uFB9F", + "noonhehinitialarabic": "\uFEE7\uFEEC", + "nooninitialarabic": "\uFEE7", + "noonjeeminitialarabic": "\uFCD2", + "noonjeemisolatedarabic": "\uFC4B", + "noonmedialarabic": "\uFEE8", + "noonmeeminitialarabic": "\uFCD5", + "noonmeemisolatedarabic": "\uFC4E", + "noonnoonfinalarabic": "\uFC8D", + "notcontains": "\u220C", + "notelement": "\u2209", + "notelementof": "\u2209", + "notequal": "\u2260", + "notgreater": "\u226F", + "notgreaternorequal": "\u2271", + "notgreaternorless": "\u2279", + "notidentical": "\u2262", + "notless": "\u226E", + "notlessnorequal": "\u2270", + "notparallel": "\u2226", + "notprecedes": "\u2280", + "notsubset": "\u2284", + "notsucceeds": "\u2281", + "notsuperset": "\u2285", + 
"nowarmenian": "\u0576", + "nparen": "\u24A9", + "nssquare": "\u33B1", + "nsuperior": "\u207F", + "ntilde": "\u00F1", + "nu": "\u03BD", + "nuhiragana": "\u306C", + "nukatakana": "\u30CC", + "nukatakanahalfwidth": "\uFF87", + "nuktabengali": "\u09BC", + "nuktadeva": "\u093C", + "nuktagujarati": "\u0ABC", + "nuktagurmukhi": "\u0A3C", + "numbersign": "\u0023", + "numbersignmonospace": "\uFF03", + "numbersignsmall": "\uFE5F", + "numeralsigngreek": "\u0374", + "numeralsignlowergreek": "\u0375", + "numero": "\u2116", + "nun": "\u05E0", + "nundagesh": "\uFB40", + "nundageshhebrew": "\uFB40", + "nunhebrew": "\u05E0", + "nvsquare": "\u33B5", + "nwsquare": "\u33BB", + "nyabengali": "\u099E", + "nyadeva": "\u091E", + "nyagujarati": "\u0A9E", + "nyagurmukhi": "\u0A1E", + "o": "\u006F", + "oacute": "\u00F3", + "oangthai": "\u0E2D", + "obarred": "\u0275", + "obarredcyrillic": "\u04E9", + "obarreddieresiscyrillic": "\u04EB", + "obengali": "\u0993", + "obopomofo": "\u311B", + "obreve": "\u014F", + "ocandradeva": "\u0911", + "ocandragujarati": "\u0A91", + "ocandravowelsigndeva": "\u0949", + "ocandravowelsigngujarati": "\u0AC9", + "ocaron": "\u01D2", + "ocircle": "\u24DE", + "ocircumflex": "\u00F4", + "ocircumflexacute": "\u1ED1", + "ocircumflexdotbelow": "\u1ED9", + "ocircumflexgrave": "\u1ED3", + "ocircumflexhookabove": "\u1ED5", + "ocircumflextilde": "\u1ED7", + "ocyrillic": "\u043E", + "odblacute": "\u0151", + "odblgrave": "\u020D", + "odeva": "\u0913", + "odieresis": "\u00F6", + "odieresiscyrillic": "\u04E7", + "odotbelow": "\u1ECD", + "oe": "\u0153", + "oekorean": "\u315A", + "ogonek": "\u02DB", + "ogonekcmb": "\u0328", + "ograve": "\u00F2", + "ogujarati": "\u0A93", + "oharmenian": "\u0585", + "ohiragana": "\u304A", + "ohookabove": "\u1ECF", + "ohorn": "\u01A1", + "ohornacute": "\u1EDB", + "ohorndotbelow": "\u1EE3", + "ohorngrave": "\u1EDD", + "ohornhookabove": "\u1EDF", + "ohorntilde": "\u1EE1", + "ohungarumlaut": "\u0151", + "oi": "\u01A3", + "oinvertedbreve": "\u020F", + "okatakana": "\u30AA", + "okatakanahalfwidth": "\uFF75", + "okorean": "\u3157", + "olehebrew": "\u05AB", + "omacron": "\u014D", + "omacronacute": "\u1E53", + "omacrongrave": "\u1E51", + "omdeva": "\u0950", + "omega": "\u03C9", + "omega1": "\u03D6", + "omegacyrillic": "\u0461", + "omegalatinclosed": "\u0277", + "omegaroundcyrillic": "\u047B", + "omegatitlocyrillic": "\u047D", + "omegatonos": "\u03CE", + "omgujarati": "\u0AD0", + "omicron": "\u03BF", + "omicrontonos": "\u03CC", + "omonospace": "\uFF4F", + "one": "\u0031", + "onearabic": "\u0661", + "onebengali": "\u09E7", + "onecircle": "\u2460", + "onecircleinversesansserif": "\u278A", + "onedeva": "\u0967", + "onedotenleader": "\u2024", + "oneeighth": "\u215B", + "onefitted": "\uF6DC", + "onegujarati": "\u0AE7", + "onegurmukhi": "\u0A67", + "onehackarabic": "\u0661", + "onehalf": "\u00BD", + "onehangzhou": "\u3021", + "oneideographicparen": "\u3220", + "oneinferior": "\u2081", + "onemonospace": "\uFF11", + "onenumeratorbengali": "\u09F4", + "oneoldstyle": "\uF731", + "oneparen": "\u2474", + "oneperiod": "\u2488", + "onepersian": "\u06F1", + "onequarter": "\u00BC", + "oneroman": "\u2170", + "onesuperior": "\u00B9", + "onethai": "\u0E51", + "onethird": "\u2153", + "oogonek": "\u01EB", + "oogonekmacron": "\u01ED", + "oogurmukhi": "\u0A13", + "oomatragurmukhi": "\u0A4B", + "oopen": "\u0254", + "oparen": "\u24AA", + "openbullet": "\u25E6", + "option": "\u2325", + "ordfeminine": "\u00AA", + "ordmasculine": "\u00BA", + "orthogonal": "\u221F", + "oshortdeva": "\u0912", + 
"oshortvowelsigndeva": "\u094A", + "oslash": "\u00F8", + "oslashacute": "\u01FF", + "osmallhiragana": "\u3049", + "osmallkatakana": "\u30A9", + "osmallkatakanahalfwidth": "\uFF6B", + "ostrokeacute": "\u01FF", + "osuperior": "\uF6F0", + "otcyrillic": "\u047F", + "otilde": "\u00F5", + "otildeacute": "\u1E4D", + "otildedieresis": "\u1E4F", + "oubopomofo": "\u3121", + "overline": "\u203E", + "overlinecenterline": "\uFE4A", + "overlinecmb": "\u0305", + "overlinedashed": "\uFE49", + "overlinedblwavy": "\uFE4C", + "overlinewavy": "\uFE4B", + "overscore": "\u00AF", + "ovowelsignbengali": "\u09CB", + "ovowelsigndeva": "\u094B", + "ovowelsigngujarati": "\u0ACB", + "p": "\u0070", + "paampssquare": "\u3380", + "paasentosquare": "\u332B", + "pabengali": "\u09AA", + "pacute": "\u1E55", + "padeva": "\u092A", + "pagedown": "\u21DF", + "pageup": "\u21DE", + "pagujarati": "\u0AAA", + "pagurmukhi": "\u0A2A", + "pahiragana": "\u3071", + "paiyannoithai": "\u0E2F", + "pakatakana": "\u30D1", + "palatalizationcyrilliccmb": "\u0484", + "palochkacyrillic": "\u04C0", + "pansioskorean": "\u317F", + "paragraph": "\u00B6", + "parallel": "\u2225", + "parenleft": "\u0028", + "parenleftaltonearabic": "\uFD3E", + "parenleftbt": "\uF8ED", + "parenleftex": "\uF8EC", + "parenleftinferior": "\u208D", + "parenleftmonospace": "\uFF08", + "parenleftsmall": "\uFE59", + "parenleftsuperior": "\u207D", + "parenlefttp": "\uF8EB", + "parenleftvertical": "\uFE35", + "parenright": "\u0029", + "parenrightaltonearabic": "\uFD3F", + "parenrightbt": "\uF8F8", + "parenrightex": "\uF8F7", + "parenrightinferior": "\u208E", + "parenrightmonospace": "\uFF09", + "parenrightsmall": "\uFE5A", + "parenrightsuperior": "\u207E", + "parenrighttp": "\uF8F6", + "parenrightvertical": "\uFE36", + "partialdiff": "\u2202", + "paseqhebrew": "\u05C0", + "pashtahebrew": "\u0599", + "pasquare": "\u33A9", + "patah": "\u05B7", + "patah11": "\u05B7", + "patah1d": "\u05B7", + "patah2a": "\u05B7", + "patahhebrew": "\u05B7", + "patahnarrowhebrew": "\u05B7", + "patahquarterhebrew": "\u05B7", + "patahwidehebrew": "\u05B7", + "pazerhebrew": "\u05A1", + "pbopomofo": "\u3106", + "pcircle": "\u24DF", + "pdotaccent": "\u1E57", + "pe": "\u05E4", + "pecyrillic": "\u043F", + "pedagesh": "\uFB44", + "pedageshhebrew": "\uFB44", + "peezisquare": "\u333B", + "pefinaldageshhebrew": "\uFB43", + "peharabic": "\u067E", + "peharmenian": "\u057A", + "pehebrew": "\u05E4", + "pehfinalarabic": "\uFB57", + "pehinitialarabic": "\uFB58", + "pehiragana": "\u307A", + "pehmedialarabic": "\uFB59", + "pekatakana": "\u30DA", + "pemiddlehookcyrillic": "\u04A7", + "perafehebrew": "\uFB4E", + "percent": "\u0025", + "percentarabic": "\u066A", + "percentmonospace": "\uFF05", + "percentsmall": "\uFE6A", + "period": "\u002E", + "periodarmenian": "\u0589", + "periodcentered": "\u00B7", + "periodhalfwidth": "\uFF61", + "periodinferior": "\uF6E7", + "periodmonospace": "\uFF0E", + "periodsmall": "\uFE52", + "periodsuperior": "\uF6E8", + "perispomenigreekcmb": "\u0342", + "perpendicular": "\u22A5", + "perthousand": "\u2030", + "peseta": "\u20A7", + "pfsquare": "\u338A", + "phabengali": "\u09AB", + "phadeva": "\u092B", + "phagujarati": "\u0AAB", + "phagurmukhi": "\u0A2B", + "phi": "\u03C6", + "phi1": "\u03D5", + "phieuphacirclekorean": "\u327A", + "phieuphaparenkorean": "\u321A", + "phieuphcirclekorean": "\u326C", + "phieuphkorean": "\u314D", + "phieuphparenkorean": "\u320C", + "philatin": "\u0278", + "phinthuthai": "\u0E3A", + "phisymbolgreek": "\u03D5", + "phook": "\u01A5", + "phophanthai": "\u0E1E", + 
"phophungthai": "\u0E1C", + "phosamphaothai": "\u0E20", + "pi": "\u03C0", + "pieupacirclekorean": "\u3273", + "pieupaparenkorean": "\u3213", + "pieupcieuckorean": "\u3176", + "pieupcirclekorean": "\u3265", + "pieupkiyeokkorean": "\u3172", + "pieupkorean": "\u3142", + "pieupparenkorean": "\u3205", + "pieupsioskiyeokkorean": "\u3174", + "pieupsioskorean": "\u3144", + "pieupsiostikeutkorean": "\u3175", + "pieupthieuthkorean": "\u3177", + "pieuptikeutkorean": "\u3173", + "pihiragana": "\u3074", + "pikatakana": "\u30D4", + "pisymbolgreek": "\u03D6", + "piwrarmenian": "\u0583", + "plus": "\u002B", + "plusbelowcmb": "\u031F", + "pluscircle": "\u2295", + "plusminus": "\u00B1", + "plusmod": "\u02D6", + "plusmonospace": "\uFF0B", + "plussmall": "\uFE62", + "plussuperior": "\u207A", + "pmonospace": "\uFF50", + "pmsquare": "\u33D8", + "pohiragana": "\u307D", + "pointingindexdownwhite": "\u261F", + "pointingindexleftwhite": "\u261C", + "pointingindexrightwhite": "\u261E", + "pointingindexupwhite": "\u261D", + "pokatakana": "\u30DD", + "poplathai": "\u0E1B", + "postalmark": "\u3012", + "postalmarkface": "\u3020", + "pparen": "\u24AB", + "precedes": "\u227A", + "prescription": "\u211E", + "primemod": "\u02B9", + "primereversed": "\u2035", + "product": "\u220F", + "projective": "\u2305", + "prolongedkana": "\u30FC", + "propellor": "\u2318", + "propersubset": "\u2282", + "propersuperset": "\u2283", + "proportion": "\u2237", + "proportional": "\u221D", + "psi": "\u03C8", + "psicyrillic": "\u0471", + "psilipneumatacyrilliccmb": "\u0486", + "pssquare": "\u33B0", + "puhiragana": "\u3077", + "pukatakana": "\u30D7", + "pvsquare": "\u33B4", + "pwsquare": "\u33BA", + "q": "\u0071", + "qadeva": "\u0958", + "qadmahebrew": "\u05A8", + "qafarabic": "\u0642", + "qaffinalarabic": "\uFED6", + "qafinitialarabic": "\uFED7", + "qafmedialarabic": "\uFED8", + "qamats": "\u05B8", + "qamats10": "\u05B8", + "qamats1a": "\u05B8", + "qamats1c": "\u05B8", + "qamats27": "\u05B8", + "qamats29": "\u05B8", + "qamats33": "\u05B8", + "qamatsde": "\u05B8", + "qamatshebrew": "\u05B8", + "qamatsnarrowhebrew": "\u05B8", + "qamatsqatanhebrew": "\u05B8", + "qamatsqatannarrowhebrew": "\u05B8", + "qamatsqatanquarterhebrew": "\u05B8", + "qamatsqatanwidehebrew": "\u05B8", + "qamatsquarterhebrew": "\u05B8", + "qamatswidehebrew": "\u05B8", + "qarneyparahebrew": "\u059F", + "qbopomofo": "\u3111", + "qcircle": "\u24E0", + "qhook": "\u02A0", + "qmonospace": "\uFF51", + "qof": "\u05E7", + "qofdagesh": "\uFB47", + "qofdageshhebrew": "\uFB47", + "qofhatafpatah": "\u05E7\u05B2", + "qofhatafpatahhebrew": "\u05E7\u05B2", + "qofhatafsegol": "\u05E7\u05B1", + "qofhatafsegolhebrew": "\u05E7\u05B1", + "qofhebrew": "\u05E7", + "qofhiriq": "\u05E7\u05B4", + "qofhiriqhebrew": "\u05E7\u05B4", + "qofholam": "\u05E7\u05B9", + "qofholamhebrew": "\u05E7\u05B9", + "qofpatah": "\u05E7\u05B7", + "qofpatahhebrew": "\u05E7\u05B7", + "qofqamats": "\u05E7\u05B8", + "qofqamatshebrew": "\u05E7\u05B8", + "qofqubuts": "\u05E7\u05BB", + "qofqubutshebrew": "\u05E7\u05BB", + "qofsegol": "\u05E7\u05B6", + "qofsegolhebrew": "\u05E7\u05B6", + "qofsheva": "\u05E7\u05B0", + "qofshevahebrew": "\u05E7\u05B0", + "qoftsere": "\u05E7\u05B5", + "qoftserehebrew": "\u05E7\u05B5", + "qparen": "\u24AC", + "quarternote": "\u2669", + "qubuts": "\u05BB", + "qubuts18": "\u05BB", + "qubuts25": "\u05BB", + "qubuts31": "\u05BB", + "qubutshebrew": "\u05BB", + "qubutsnarrowhebrew": "\u05BB", + "qubutsquarterhebrew": "\u05BB", + "qubutswidehebrew": "\u05BB", + "question": "\u003F", + "questionarabic": 
"\u061F", + "questionarmenian": "\u055E", + "questiondown": "\u00BF", + "questiondownsmall": "\uF7BF", + "questiongreek": "\u037E", + "questionmonospace": "\uFF1F", + "questionsmall": "\uF73F", + "quotedbl": "\u0022", + "quotedblbase": "\u201E", + "quotedblleft": "\u201C", + "quotedblmonospace": "\uFF02", + "quotedblprime": "\u301E", + "quotedblprimereversed": "\u301D", + "quotedblright": "\u201D", + "quoteleft": "\u2018", + "quoteleftreversed": "\u201B", + "quotereversed": "\u201B", + "quoteright": "\u2019", + "quoterightn": "\u0149", + "quotesinglbase": "\u201A", + "quotesingle": "\u0027", + "quotesinglemonospace": "\uFF07", + "r": "\u0072", + "raarmenian": "\u057C", + "rabengali": "\u09B0", + "racute": "\u0155", + "radeva": "\u0930", + "radical": "\u221A", + "radicalex": "\uF8E5", + "radoverssquare": "\u33AE", + "radoverssquaredsquare": "\u33AF", + "radsquare": "\u33AD", + "rafe": "\u05BF", + "rafehebrew": "\u05BF", + "ragujarati": "\u0AB0", + "ragurmukhi": "\u0A30", + "rahiragana": "\u3089", + "rakatakana": "\u30E9", + "rakatakanahalfwidth": "\uFF97", + "ralowerdiagonalbengali": "\u09F1", + "ramiddlediagonalbengali": "\u09F0", + "ramshorn": "\u0264", + "ratio": "\u2236", + "rbopomofo": "\u3116", + "rcaron": "\u0159", + "rcedilla": "\u0157", + "rcircle": "\u24E1", + "rcommaaccent": "\u0157", + "rdblgrave": "\u0211", + "rdotaccent": "\u1E59", + "rdotbelow": "\u1E5B", + "rdotbelowmacron": "\u1E5D", + "referencemark": "\u203B", + "reflexsubset": "\u2286", + "reflexsuperset": "\u2287", + "registered": "\u00AE", + "registersans": "\uF8E8", + "registerserif": "\uF6DA", + "reharabic": "\u0631", + "reharmenian": "\u0580", + "rehfinalarabic": "\uFEAE", + "rehiragana": "\u308C", + "rehyehaleflamarabic": "\u0631\uFEF3\uFE8E\u0644", + "rekatakana": "\u30EC", + "rekatakanahalfwidth": "\uFF9A", + "resh": "\u05E8", + "reshdageshhebrew": "\uFB48", + "reshhatafpatah": "\u05E8\u05B2", + "reshhatafpatahhebrew": "\u05E8\u05B2", + "reshhatafsegol": "\u05E8\u05B1", + "reshhatafsegolhebrew": "\u05E8\u05B1", + "reshhebrew": "\u05E8", + "reshhiriq": "\u05E8\u05B4", + "reshhiriqhebrew": "\u05E8\u05B4", + "reshholam": "\u05E8\u05B9", + "reshholamhebrew": "\u05E8\u05B9", + "reshpatah": "\u05E8\u05B7", + "reshpatahhebrew": "\u05E8\u05B7", + "reshqamats": "\u05E8\u05B8", + "reshqamatshebrew": "\u05E8\u05B8", + "reshqubuts": "\u05E8\u05BB", + "reshqubutshebrew": "\u05E8\u05BB", + "reshsegol": "\u05E8\u05B6", + "reshsegolhebrew": "\u05E8\u05B6", + "reshsheva": "\u05E8\u05B0", + "reshshevahebrew": "\u05E8\u05B0", + "reshtsere": "\u05E8\u05B5", + "reshtserehebrew": "\u05E8\u05B5", + "reversedtilde": "\u223D", + "reviahebrew": "\u0597", + "reviamugrashhebrew": "\u0597", + "revlogicalnot": "\u2310", + "rfishhook": "\u027E", + "rfishhookreversed": "\u027F", + "rhabengali": "\u09DD", + "rhadeva": "\u095D", + "rho": "\u03C1", + "rhook": "\u027D", + "rhookturned": "\u027B", + "rhookturnedsuperior": "\u02B5", + "rhosymbolgreek": "\u03F1", + "rhotichookmod": "\u02DE", + "rieulacirclekorean": "\u3271", + "rieulaparenkorean": "\u3211", + "rieulcirclekorean": "\u3263", + "rieulhieuhkorean": "\u3140", + "rieulkiyeokkorean": "\u313A", + "rieulkiyeoksioskorean": "\u3169", + "rieulkorean": "\u3139", + "rieulmieumkorean": "\u313B", + "rieulpansioskorean": "\u316C", + "rieulparenkorean": "\u3203", + "rieulphieuphkorean": "\u313F", + "rieulpieupkorean": "\u313C", + "rieulpieupsioskorean": "\u316B", + "rieulsioskorean": "\u313D", + "rieulthieuthkorean": "\u313E", + "rieultikeutkorean": "\u316A", + "rieulyeorinhieuhkorean": "\u316D", + 
"rightangle": "\u221F", + "righttackbelowcmb": "\u0319", + "righttriangle": "\u22BF", + "rihiragana": "\u308A", + "rikatakana": "\u30EA", + "rikatakanahalfwidth": "\uFF98", + "ring": "\u02DA", + "ringbelowcmb": "\u0325", + "ringcmb": "\u030A", + "ringhalfleft": "\u02BF", + "ringhalfleftarmenian": "\u0559", + "ringhalfleftbelowcmb": "\u031C", + "ringhalfleftcentered": "\u02D3", + "ringhalfright": "\u02BE", + "ringhalfrightbelowcmb": "\u0339", + "ringhalfrightcentered": "\u02D2", + "rinvertedbreve": "\u0213", + "rittorusquare": "\u3351", + "rlinebelow": "\u1E5F", + "rlongleg": "\u027C", + "rlonglegturned": "\u027A", + "rmonospace": "\uFF52", + "rohiragana": "\u308D", + "rokatakana": "\u30ED", + "rokatakanahalfwidth": "\uFF9B", + "roruathai": "\u0E23", + "rparen": "\u24AD", + "rrabengali": "\u09DC", + "rradeva": "\u0931", + "rragurmukhi": "\u0A5C", + "rreharabic": "\u0691", + "rrehfinalarabic": "\uFB8D", + "rrvocalicbengali": "\u09E0", + "rrvocalicdeva": "\u0960", + "rrvocalicgujarati": "\u0AE0", + "rrvocalicvowelsignbengali": "\u09C4", + "rrvocalicvowelsigndeva": "\u0944", + "rrvocalicvowelsigngujarati": "\u0AC4", + "rsuperior": "\uF6F1", + "rtblock": "\u2590", + "rturned": "\u0279", + "rturnedsuperior": "\u02B4", + "ruhiragana": "\u308B", + "rukatakana": "\u30EB", + "rukatakanahalfwidth": "\uFF99", + "rupeemarkbengali": "\u09F2", + "rupeesignbengali": "\u09F3", + "rupiah": "\uF6DD", + "ruthai": "\u0E24", + "rvocalicbengali": "\u098B", + "rvocalicdeva": "\u090B", + "rvocalicgujarati": "\u0A8B", + "rvocalicvowelsignbengali": "\u09C3", + "rvocalicvowelsigndeva": "\u0943", + "rvocalicvowelsigngujarati": "\u0AC3", + "s": "\u0073", + "sabengali": "\u09B8", + "sacute": "\u015B", + "sacutedotaccent": "\u1E65", + "sadarabic": "\u0635", + "sadeva": "\u0938", + "sadfinalarabic": "\uFEBA", + "sadinitialarabic": "\uFEBB", + "sadmedialarabic": "\uFEBC", + "sagujarati": "\u0AB8", + "sagurmukhi": "\u0A38", + "sahiragana": "\u3055", + "sakatakana": "\u30B5", + "sakatakanahalfwidth": "\uFF7B", + "sallallahoualayhewasallamarabic": "\uFDFA", + "samekh": "\u05E1", + "samekhdagesh": "\uFB41", + "samekhdageshhebrew": "\uFB41", + "samekhhebrew": "\u05E1", + "saraaathai": "\u0E32", + "saraaethai": "\u0E41", + "saraaimaimalaithai": "\u0E44", + "saraaimaimuanthai": "\u0E43", + "saraamthai": "\u0E33", + "saraathai": "\u0E30", + "saraethai": "\u0E40", + "saraiileftthai": "\uF886", + "saraiithai": "\u0E35", + "saraileftthai": "\uF885", + "saraithai": "\u0E34", + "saraothai": "\u0E42", + "saraueeleftthai": "\uF888", + "saraueethai": "\u0E37", + "saraueleftthai": "\uF887", + "sarauethai": "\u0E36", + "sarauthai": "\u0E38", + "sarauuthai": "\u0E39", + "sbopomofo": "\u3119", + "scaron": "\u0161", + "scarondotaccent": "\u1E67", + "scedilla": "\u015F", + "schwa": "\u0259", + "schwacyrillic": "\u04D9", + "schwadieresiscyrillic": "\u04DB", + "schwahook": "\u025A", + "scircle": "\u24E2", + "scircumflex": "\u015D", + "scommaaccent": "\u0219", + "sdotaccent": "\u1E61", + "sdotbelow": "\u1E63", + "sdotbelowdotaccent": "\u1E69", + "seagullbelowcmb": "\u033C", + "second": "\u2033", + "secondtonechinese": "\u02CA", + "section": "\u00A7", + "seenarabic": "\u0633", + "seenfinalarabic": "\uFEB2", + "seeninitialarabic": "\uFEB3", + "seenmedialarabic": "\uFEB4", + "segol": "\u05B6", + "segol13": "\u05B6", + "segol1f": "\u05B6", + "segol2c": "\u05B6", + "segolhebrew": "\u05B6", + "segolnarrowhebrew": "\u05B6", + "segolquarterhebrew": "\u05B6", + "segoltahebrew": "\u0592", + "segolwidehebrew": "\u05B6", + "seharmenian": "\u057D", + 
"sehiragana": "\u305B", + "sekatakana": "\u30BB", + "sekatakanahalfwidth": "\uFF7E", + "semicolon": "\u003B", + "semicolonarabic": "\u061B", + "semicolonmonospace": "\uFF1B", + "semicolonsmall": "\uFE54", + "semivoicedmarkkana": "\u309C", + "semivoicedmarkkanahalfwidth": "\uFF9F", + "sentisquare": "\u3322", + "sentosquare": "\u3323", + "seven": "\u0037", + "sevenarabic": "\u0667", + "sevenbengali": "\u09ED", + "sevencircle": "\u2466", + "sevencircleinversesansserif": "\u2790", + "sevendeva": "\u096D", + "seveneighths": "\u215E", + "sevengujarati": "\u0AED", + "sevengurmukhi": "\u0A6D", + "sevenhackarabic": "\u0667", + "sevenhangzhou": "\u3027", + "sevenideographicparen": "\u3226", + "seveninferior": "\u2087", + "sevenmonospace": "\uFF17", + "sevenoldstyle": "\uF737", + "sevenparen": "\u247A", + "sevenperiod": "\u248E", + "sevenpersian": "\u06F7", + "sevenroman": "\u2176", + "sevensuperior": "\u2077", + "seventeencircle": "\u2470", + "seventeenparen": "\u2484", + "seventeenperiod": "\u2498", + "seventhai": "\u0E57", + "sfthyphen": "\u00AD", + "shaarmenian": "\u0577", + "shabengali": "\u09B6", + "shacyrillic": "\u0448", + "shaddaarabic": "\u0651", + "shaddadammaarabic": "\uFC61", + "shaddadammatanarabic": "\uFC5E", + "shaddafathaarabic": "\uFC60", + "shaddafathatanarabic": "\u0651\u064B", + "shaddakasraarabic": "\uFC62", + "shaddakasratanarabic": "\uFC5F", + "shade": "\u2592", + "shadedark": "\u2593", + "shadelight": "\u2591", + "shademedium": "\u2592", + "shadeva": "\u0936", + "shagujarati": "\u0AB6", + "shagurmukhi": "\u0A36", + "shalshelethebrew": "\u0593", + "shbopomofo": "\u3115", + "shchacyrillic": "\u0449", + "sheenarabic": "\u0634", + "sheenfinalarabic": "\uFEB6", + "sheeninitialarabic": "\uFEB7", + "sheenmedialarabic": "\uFEB8", + "sheicoptic": "\u03E3", + "sheqel": "\u20AA", + "sheqelhebrew": "\u20AA", + "sheva": "\u05B0", + "sheva115": "\u05B0", + "sheva15": "\u05B0", + "sheva22": "\u05B0", + "sheva2e": "\u05B0", + "shevahebrew": "\u05B0", + "shevanarrowhebrew": "\u05B0", + "shevaquarterhebrew": "\u05B0", + "shevawidehebrew": "\u05B0", + "shhacyrillic": "\u04BB", + "shimacoptic": "\u03ED", + "shin": "\u05E9", + "shindagesh": "\uFB49", + "shindageshhebrew": "\uFB49", + "shindageshshindot": "\uFB2C", + "shindageshshindothebrew": "\uFB2C", + "shindageshsindot": "\uFB2D", + "shindageshsindothebrew": "\uFB2D", + "shindothebrew": "\u05C1", + "shinhebrew": "\u05E9", + "shinshindot": "\uFB2A", + "shinshindothebrew": "\uFB2A", + "shinsindot": "\uFB2B", + "shinsindothebrew": "\uFB2B", + "shook": "\u0282", + "sigma": "\u03C3", + "sigma1": "\u03C2", + "sigmafinal": "\u03C2", + "sigmalunatesymbolgreek": "\u03F2", + "sihiragana": "\u3057", + "sikatakana": "\u30B7", + "sikatakanahalfwidth": "\uFF7C", + "siluqhebrew": "\u05BD", + "siluqlefthebrew": "\u05BD", + "similar": "\u223C", + "sindothebrew": "\u05C2", + "siosacirclekorean": "\u3274", + "siosaparenkorean": "\u3214", + "sioscieuckorean": "\u317E", + "sioscirclekorean": "\u3266", + "sioskiyeokkorean": "\u317A", + "sioskorean": "\u3145", + "siosnieunkorean": "\u317B", + "siosparenkorean": "\u3206", + "siospieupkorean": "\u317D", + "siostikeutkorean": "\u317C", + "six": "\u0036", + "sixarabic": "\u0666", + "sixbengali": "\u09EC", + "sixcircle": "\u2465", + "sixcircleinversesansserif": "\u278F", + "sixdeva": "\u096C", + "sixgujarati": "\u0AEC", + "sixgurmukhi": "\u0A6C", + "sixhackarabic": "\u0666", + "sixhangzhou": "\u3026", + "sixideographicparen": "\u3225", + "sixinferior": "\u2086", + "sixmonospace": "\uFF16", + "sixoldstyle": "\uF736", + 
"sixparen": "\u2479", + "sixperiod": "\u248D", + "sixpersian": "\u06F6", + "sixroman": "\u2175", + "sixsuperior": "\u2076", + "sixteencircle": "\u246F", + "sixteencurrencydenominatorbengali": "\u09F9", + "sixteenparen": "\u2483", + "sixteenperiod": "\u2497", + "sixthai": "\u0E56", + "slash": "\u002F", + "slashmonospace": "\uFF0F", + "slong": "\u017F", + "slongdotaccent": "\u1E9B", + "smileface": "\u263A", + "smonospace": "\uFF53", + "sofpasuqhebrew": "\u05C3", + "softhyphen": "\u00AD", + "softsigncyrillic": "\u044C", + "sohiragana": "\u305D", + "sokatakana": "\u30BD", + "sokatakanahalfwidth": "\uFF7F", + "soliduslongoverlaycmb": "\u0338", + "solidusshortoverlaycmb": "\u0337", + "sorusithai": "\u0E29", + "sosalathai": "\u0E28", + "sosothai": "\u0E0B", + "sosuathai": "\u0E2A", + "space": "\u0020", + "spacehackarabic": "\u0020", + "spade": "\u2660", + "spadesuitblack": "\u2660", + "spadesuitwhite": "\u2664", + "sparen": "\u24AE", + "squarebelowcmb": "\u033B", + "squarecc": "\u33C4", + "squarecm": "\u339D", + "squarediagonalcrosshatchfill": "\u25A9", + "squarehorizontalfill": "\u25A4", + "squarekg": "\u338F", + "squarekm": "\u339E", + "squarekmcapital": "\u33CE", + "squareln": "\u33D1", + "squarelog": "\u33D2", + "squaremg": "\u338E", + "squaremil": "\u33D5", + "squaremm": "\u339C", + "squaremsquared": "\u33A1", + "squareorthogonalcrosshatchfill": "\u25A6", + "squareupperlefttolowerrightfill": "\u25A7", + "squareupperrighttolowerleftfill": "\u25A8", + "squareverticalfill": "\u25A5", + "squarewhitewithsmallblack": "\u25A3", + "srsquare": "\u33DB", + "ssabengali": "\u09B7", + "ssadeva": "\u0937", + "ssagujarati": "\u0AB7", + "ssangcieuckorean": "\u3149", + "ssanghieuhkorean": "\u3185", + "ssangieungkorean": "\u3180", + "ssangkiyeokkorean": "\u3132", + "ssangnieunkorean": "\u3165", + "ssangpieupkorean": "\u3143", + "ssangsioskorean": "\u3146", + "ssangtikeutkorean": "\u3138", + "ssuperior": "\uF6F2", + "sterling": "\u00A3", + "sterlingmonospace": "\uFFE1", + "strokelongoverlaycmb": "\u0336", + "strokeshortoverlaycmb": "\u0335", + "subset": "\u2282", + "subsetnotequal": "\u228A", + "subsetorequal": "\u2286", + "succeeds": "\u227B", + "suchthat": "\u220B", + "suhiragana": "\u3059", + "sukatakana": "\u30B9", + "sukatakanahalfwidth": "\uFF7D", + "sukunarabic": "\u0652", + "summation": "\u2211", + "sun": "\u263C", + "superset": "\u2283", + "supersetnotequal": "\u228B", + "supersetorequal": "\u2287", + "svsquare": "\u33DC", + "syouwaerasquare": "\u337C", + "t": "\u0074", + "tabengali": "\u09A4", + "tackdown": "\u22A4", + "tackleft": "\u22A3", + "tadeva": "\u0924", + "tagujarati": "\u0AA4", + "tagurmukhi": "\u0A24", + "taharabic": "\u0637", + "tahfinalarabic": "\uFEC2", + "tahinitialarabic": "\uFEC3", + "tahiragana": "\u305F", + "tahmedialarabic": "\uFEC4", + "taisyouerasquare": "\u337D", + "takatakana": "\u30BF", + "takatakanahalfwidth": "\uFF80", + "tatweelarabic": "\u0640", + "tau": "\u03C4", + "tav": "\u05EA", + "tavdages": "\uFB4A", + "tavdagesh": "\uFB4A", + "tavdageshhebrew": "\uFB4A", + "tavhebrew": "\u05EA", + "tbar": "\u0167", + "tbopomofo": "\u310A", + "tcaron": "\u0165", + "tccurl": "\u02A8", + "tcedilla": "\u0163", + "tcheharabic": "\u0686", + "tchehfinalarabic": "\uFB7B", + "tchehinitialarabic": "\uFB7C", + "tchehmedialarabic": "\uFB7D", + "tchehmeeminitialarabic": "\uFB7C\uFEE4", + "tcircle": "\u24E3", + "tcircumflexbelow": "\u1E71", + "tcommaaccent": "\u0163", + "tdieresis": "\u1E97", + "tdotaccent": "\u1E6B", + "tdotbelow": "\u1E6D", + "tecyrillic": "\u0442", + "tedescendercyrillic": 
"\u04AD", + "teharabic": "\u062A", + "tehfinalarabic": "\uFE96", + "tehhahinitialarabic": "\uFCA2", + "tehhahisolatedarabic": "\uFC0C", + "tehinitialarabic": "\uFE97", + "tehiragana": "\u3066", + "tehjeeminitialarabic": "\uFCA1", + "tehjeemisolatedarabic": "\uFC0B", + "tehmarbutaarabic": "\u0629", + "tehmarbutafinalarabic": "\uFE94", + "tehmedialarabic": "\uFE98", + "tehmeeminitialarabic": "\uFCA4", + "tehmeemisolatedarabic": "\uFC0E", + "tehnoonfinalarabic": "\uFC73", + "tekatakana": "\u30C6", + "tekatakanahalfwidth": "\uFF83", + "telephone": "\u2121", + "telephoneblack": "\u260E", + "telishagedolahebrew": "\u05A0", + "telishaqetanahebrew": "\u05A9", + "tencircle": "\u2469", + "tenideographicparen": "\u3229", + "tenparen": "\u247D", + "tenperiod": "\u2491", + "tenroman": "\u2179", + "tesh": "\u02A7", + "tet": "\u05D8", + "tetdagesh": "\uFB38", + "tetdageshhebrew": "\uFB38", + "tethebrew": "\u05D8", + "tetsecyrillic": "\u04B5", + "tevirhebrew": "\u059B", + "tevirlefthebrew": "\u059B", + "thabengali": "\u09A5", + "thadeva": "\u0925", + "thagujarati": "\u0AA5", + "thagurmukhi": "\u0A25", + "thalarabic": "\u0630", + "thalfinalarabic": "\uFEAC", + "thanthakhatlowleftthai": "\uF898", + "thanthakhatlowrightthai": "\uF897", + "thanthakhatthai": "\u0E4C", + "thanthakhatupperleftthai": "\uF896", + "theharabic": "\u062B", + "thehfinalarabic": "\uFE9A", + "thehinitialarabic": "\uFE9B", + "thehmedialarabic": "\uFE9C", + "thereexists": "\u2203", + "therefore": "\u2234", + "theta": "\u03B8", + "theta1": "\u03D1", + "thetasymbolgreek": "\u03D1", + "thieuthacirclekorean": "\u3279", + "thieuthaparenkorean": "\u3219", + "thieuthcirclekorean": "\u326B", + "thieuthkorean": "\u314C", + "thieuthparenkorean": "\u320B", + "thirteencircle": "\u246C", + "thirteenparen": "\u2480", + "thirteenperiod": "\u2494", + "thonangmonthothai": "\u0E11", + "thook": "\u01AD", + "thophuthaothai": "\u0E12", + "thorn": "\u00FE", + "thothahanthai": "\u0E17", + "thothanthai": "\u0E10", + "thothongthai": "\u0E18", + "thothungthai": "\u0E16", + "thousandcyrillic": "\u0482", + "thousandsseparatorarabic": "\u066C", + "thousandsseparatorpersian": "\u066C", + "three": "\u0033", + "threearabic": "\u0663", + "threebengali": "\u09E9", + "threecircle": "\u2462", + "threecircleinversesansserif": "\u278C", + "threedeva": "\u0969", + "threeeighths": "\u215C", + "threegujarati": "\u0AE9", + "threegurmukhi": "\u0A69", + "threehackarabic": "\u0663", + "threehangzhou": "\u3023", + "threeideographicparen": "\u3222", + "threeinferior": "\u2083", + "threemonospace": "\uFF13", + "threenumeratorbengali": "\u09F6", + "threeoldstyle": "\uF733", + "threeparen": "\u2476", + "threeperiod": "\u248A", + "threepersian": "\u06F3", + "threequarters": "\u00BE", + "threequartersemdash": "\uF6DE", + "threeroman": "\u2172", + "threesuperior": "\u00B3", + "threethai": "\u0E53", + "thzsquare": "\u3394", + "tihiragana": "\u3061", + "tikatakana": "\u30C1", + "tikatakanahalfwidth": "\uFF81", + "tikeutacirclekorean": "\u3270", + "tikeutaparenkorean": "\u3210", + "tikeutcirclekorean": "\u3262", + "tikeutkorean": "\u3137", + "tikeutparenkorean": "\u3202", + "tilde": "\u02DC", + "tildebelowcmb": "\u0330", + "tildecmb": "\u0303", + "tildecomb": "\u0303", + "tildedoublecmb": "\u0360", + "tildeoperator": "\u223C", + "tildeoverlaycmb": "\u0334", + "tildeverticalcmb": "\u033E", + "timescircle": "\u2297", + "tipehahebrew": "\u0596", + "tipehalefthebrew": "\u0596", + "tippigurmukhi": "\u0A70", + "titlocyrilliccmb": "\u0483", + "tiwnarmenian": "\u057F", + "tlinebelow": "\u1E6F", + 
"tmonospace": "\uFF54", + "toarmenian": "\u0569", + "tohiragana": "\u3068", + "tokatakana": "\u30C8", + "tokatakanahalfwidth": "\uFF84", + "tonebarextrahighmod": "\u02E5", + "tonebarextralowmod": "\u02E9", + "tonebarhighmod": "\u02E6", + "tonebarlowmod": "\u02E8", + "tonebarmidmod": "\u02E7", + "tonefive": "\u01BD", + "tonesix": "\u0185", + "tonetwo": "\u01A8", + "tonos": "\u0384", + "tonsquare": "\u3327", + "topatakthai": "\u0E0F", + "tortoiseshellbracketleft": "\u3014", + "tortoiseshellbracketleftsmall": "\uFE5D", + "tortoiseshellbracketleftvertical": "\uFE39", + "tortoiseshellbracketright": "\u3015", + "tortoiseshellbracketrightsmall": "\uFE5E", + "tortoiseshellbracketrightvertical": "\uFE3A", + "totaothai": "\u0E15", + "tpalatalhook": "\u01AB", + "tparen": "\u24AF", + "trademark": "\u2122", + "trademarksans": "\uF8EA", + "trademarkserif": "\uF6DB", + "tretroflexhook": "\u0288", + "triagdn": "\u25BC", + "triaglf": "\u25C4", + "triagrt": "\u25BA", + "triagup": "\u25B2", + "ts": "\u02A6", + "tsadi": "\u05E6", + "tsadidagesh": "\uFB46", + "tsadidageshhebrew": "\uFB46", + "tsadihebrew": "\u05E6", + "tsecyrillic": "\u0446", + "tsere": "\u05B5", + "tsere12": "\u05B5", + "tsere1e": "\u05B5", + "tsere2b": "\u05B5", + "tserehebrew": "\u05B5", + "tserenarrowhebrew": "\u05B5", + "tserequarterhebrew": "\u05B5", + "tserewidehebrew": "\u05B5", + "tshecyrillic": "\u045B", + "tsuperior": "\uF6F3", + "ttabengali": "\u099F", + "ttadeva": "\u091F", + "ttagujarati": "\u0A9F", + "ttagurmukhi": "\u0A1F", + "tteharabic": "\u0679", + "ttehfinalarabic": "\uFB67", + "ttehinitialarabic": "\uFB68", + "ttehmedialarabic": "\uFB69", + "tthabengali": "\u09A0", + "tthadeva": "\u0920", + "tthagujarati": "\u0AA0", + "tthagurmukhi": "\u0A20", + "tturned": "\u0287", + "tuhiragana": "\u3064", + "tukatakana": "\u30C4", + "tukatakanahalfwidth": "\uFF82", + "tusmallhiragana": "\u3063", + "tusmallkatakana": "\u30C3", + "tusmallkatakanahalfwidth": "\uFF6F", + "twelvecircle": "\u246B", + "twelveparen": "\u247F", + "twelveperiod": "\u2493", + "twelveroman": "\u217B", + "twentycircle": "\u2473", + "twentyhangzhou": "\u5344", + "twentyparen": "\u2487", + "twentyperiod": "\u249B", + "two": "\u0032", + "twoarabic": "\u0662", + "twobengali": "\u09E8", + "twocircle": "\u2461", + "twocircleinversesansserif": "\u278B", + "twodeva": "\u0968", + "twodotenleader": "\u2025", + "twodotleader": "\u2025", + "twodotleadervertical": "\uFE30", + "twogujarati": "\u0AE8", + "twogurmukhi": "\u0A68", + "twohackarabic": "\u0662", + "twohangzhou": "\u3022", + "twoideographicparen": "\u3221", + "twoinferior": "\u2082", + "twomonospace": "\uFF12", + "twonumeratorbengali": "\u09F5", + "twooldstyle": "\uF732", + "twoparen": "\u2475", + "twoperiod": "\u2489", + "twopersian": "\u06F2", + "tworoman": "\u2171", + "twostroke": "\u01BB", + "twosuperior": "\u00B2", + "twothai": "\u0E52", + "twothirds": "\u2154", + "u": "\u0075", + "uacute": "\u00FA", + "ubar": "\u0289", + "ubengali": "\u0989", + "ubopomofo": "\u3128", + "ubreve": "\u016D", + "ucaron": "\u01D4", + "ucircle": "\u24E4", + "ucircumflex": "\u00FB", + "ucircumflexbelow": "\u1E77", + "ucyrillic": "\u0443", + "udattadeva": "\u0951", + "udblacute": "\u0171", + "udblgrave": "\u0215", + "udeva": "\u0909", + "udieresis": "\u00FC", + "udieresisacute": "\u01D8", + "udieresisbelow": "\u1E73", + "udieresiscaron": "\u01DA", + "udieresiscyrillic": "\u04F1", + "udieresisgrave": "\u01DC", + "udieresismacron": "\u01D6", + "udotbelow": "\u1EE5", + "ugrave": "\u00F9", + "ugujarati": "\u0A89", + "ugurmukhi": "\u0A09", + 
"uhiragana": "\u3046", + "uhookabove": "\u1EE7", + "uhorn": "\u01B0", + "uhornacute": "\u1EE9", + "uhorndotbelow": "\u1EF1", + "uhorngrave": "\u1EEB", + "uhornhookabove": "\u1EED", + "uhorntilde": "\u1EEF", + "uhungarumlaut": "\u0171", + "uhungarumlautcyrillic": "\u04F3", + "uinvertedbreve": "\u0217", + "ukatakana": "\u30A6", + "ukatakanahalfwidth": "\uFF73", + "ukcyrillic": "\u0479", + "ukorean": "\u315C", + "umacron": "\u016B", + "umacroncyrillic": "\u04EF", + "umacrondieresis": "\u1E7B", + "umatragurmukhi": "\u0A41", + "umonospace": "\uFF55", + "underscore": "\u005F", + "underscoredbl": "\u2017", + "underscoremonospace": "\uFF3F", + "underscorevertical": "\uFE33", + "underscorewavy": "\uFE4F", + "union": "\u222A", + "universal": "\u2200", + "uogonek": "\u0173", + "uparen": "\u24B0", + "upblock": "\u2580", + "upperdothebrew": "\u05C4", + "upsilon": "\u03C5", + "upsilondieresis": "\u03CB", + "upsilondieresistonos": "\u03B0", + "upsilonlatin": "\u028A", + "upsilontonos": "\u03CD", + "uptackbelowcmb": "\u031D", + "uptackmod": "\u02D4", + "uragurmukhi": "\u0A73", + "uring": "\u016F", + "ushortcyrillic": "\u045E", + "usmallhiragana": "\u3045", + "usmallkatakana": "\u30A5", + "usmallkatakanahalfwidth": "\uFF69", + "ustraightcyrillic": "\u04AF", + "ustraightstrokecyrillic": "\u04B1", + "utilde": "\u0169", + "utildeacute": "\u1E79", + "utildebelow": "\u1E75", + "uubengali": "\u098A", + "uudeva": "\u090A", + "uugujarati": "\u0A8A", + "uugurmukhi": "\u0A0A", + "uumatragurmukhi": "\u0A42", + "uuvowelsignbengali": "\u09C2", + "uuvowelsigndeva": "\u0942", + "uuvowelsigngujarati": "\u0AC2", + "uvowelsignbengali": "\u09C1", + "uvowelsigndeva": "\u0941", + "uvowelsigngujarati": "\u0AC1", + "v": "\u0076", + "vadeva": "\u0935", + "vagujarati": "\u0AB5", + "vagurmukhi": "\u0A35", + "vakatakana": "\u30F7", + "vav": "\u05D5", + "vavdagesh": "\uFB35", + "vavdagesh65": "\uFB35", + "vavdageshhebrew": "\uFB35", + "vavhebrew": "\u05D5", + "vavholam": "\uFB4B", + "vavholamhebrew": "\uFB4B", + "vavvavhebrew": "\u05F0", + "vavyodhebrew": "\u05F1", + "vcircle": "\u24E5", + "vdotbelow": "\u1E7F", + "vecyrillic": "\u0432", + "veharabic": "\u06A4", + "vehfinalarabic": "\uFB6B", + "vehinitialarabic": "\uFB6C", + "vehmedialarabic": "\uFB6D", + "vekatakana": "\u30F9", + "venus": "\u2640", + "verticalbar": "\u007C", + "verticallineabovecmb": "\u030D", + "verticallinebelowcmb": "\u0329", + "verticallinelowmod": "\u02CC", + "verticallinemod": "\u02C8", + "vewarmenian": "\u057E", + "vhook": "\u028B", + "vikatakana": "\u30F8", + "viramabengali": "\u09CD", + "viramadeva": "\u094D", + "viramagujarati": "\u0ACD", + "visargabengali": "\u0983", + "visargadeva": "\u0903", + "visargagujarati": "\u0A83", + "vmonospace": "\uFF56", + "voarmenian": "\u0578", + "voicediterationhiragana": "\u309E", + "voicediterationkatakana": "\u30FE", + "voicedmarkkana": "\u309B", + "voicedmarkkanahalfwidth": "\uFF9E", + "vokatakana": "\u30FA", + "vparen": "\u24B1", + "vtilde": "\u1E7D", + "vturned": "\u028C", + "vuhiragana": "\u3094", + "vukatakana": "\u30F4", + "w": "\u0077", + "wacute": "\u1E83", + "waekorean": "\u3159", + "wahiragana": "\u308F", + "wakatakana": "\u30EF", + "wakatakanahalfwidth": "\uFF9C", + "wakorean": "\u3158", + "wasmallhiragana": "\u308E", + "wasmallkatakana": "\u30EE", + "wattosquare": "\u3357", + "wavedash": "\u301C", + "wavyunderscorevertical": "\uFE34", + "wawarabic": "\u0648", + "wawfinalarabic": "\uFEEE", + "wawhamzaabovearabic": "\u0624", + "wawhamzaabovefinalarabic": "\uFE86", + "wbsquare": "\u33DD", + "wcircle": "\u24E6", 
+ "wcircumflex": "\u0175", + "wdieresis": "\u1E85", + "wdotaccent": "\u1E87", + "wdotbelow": "\u1E89", + "wehiragana": "\u3091", + "weierstrass": "\u2118", + "wekatakana": "\u30F1", + "wekorean": "\u315E", + "weokorean": "\u315D", + "wgrave": "\u1E81", + "whitebullet": "\u25E6", + "whitecircle": "\u25CB", + "whitecircleinverse": "\u25D9", + "whitecornerbracketleft": "\u300E", + "whitecornerbracketleftvertical": "\uFE43", + "whitecornerbracketright": "\u300F", + "whitecornerbracketrightvertical": "\uFE44", + "whitediamond": "\u25C7", + "whitediamondcontainingblacksmalldiamond": "\u25C8", + "whitedownpointingsmalltriangle": "\u25BF", + "whitedownpointingtriangle": "\u25BD", + "whiteleftpointingsmalltriangle": "\u25C3", + "whiteleftpointingtriangle": "\u25C1", + "whitelenticularbracketleft": "\u3016", + "whitelenticularbracketright": "\u3017", + "whiterightpointingsmalltriangle": "\u25B9", + "whiterightpointingtriangle": "\u25B7", + "whitesmallsquare": "\u25AB", + "whitesmilingface": "\u263A", + "whitesquare": "\u25A1", + "whitestar": "\u2606", + "whitetelephone": "\u260F", + "whitetortoiseshellbracketleft": "\u3018", + "whitetortoiseshellbracketright": "\u3019", + "whiteuppointingsmalltriangle": "\u25B5", + "whiteuppointingtriangle": "\u25B3", + "wihiragana": "\u3090", + "wikatakana": "\u30F0", + "wikorean": "\u315F", + "wmonospace": "\uFF57", + "wohiragana": "\u3092", + "wokatakana": "\u30F2", + "wokatakanahalfwidth": "\uFF66", + "won": "\u20A9", + "wonmonospace": "\uFFE6", + "wowaenthai": "\u0E27", + "wparen": "\u24B2", + "wring": "\u1E98", + "wsuperior": "\u02B7", + "wturned": "\u028D", + "wynn": "\u01BF", + "x": "\u0078", + "xabovecmb": "\u033D", + "xbopomofo": "\u3112", + "xcircle": "\u24E7", + "xdieresis": "\u1E8D", + "xdotaccent": "\u1E8B", + "xeharmenian": "\u056D", + "xi": "\u03BE", + "xmonospace": "\uFF58", + "xparen": "\u24B3", + "xsuperior": "\u02E3", + "y": "\u0079", + "yaadosquare": "\u334E", + "yabengali": "\u09AF", + "yacute": "\u00FD", + "yadeva": "\u092F", + "yaekorean": "\u3152", + "yagujarati": "\u0AAF", + "yagurmukhi": "\u0A2F", + "yahiragana": "\u3084", + "yakatakana": "\u30E4", + "yakatakanahalfwidth": "\uFF94", + "yakorean": "\u3151", + "yamakkanthai": "\u0E4E", + "yasmallhiragana": "\u3083", + "yasmallkatakana": "\u30E3", + "yasmallkatakanahalfwidth": "\uFF6C", + "yatcyrillic": "\u0463", + "ycircle": "\u24E8", + "ycircumflex": "\u0177", + "ydieresis": "\u00FF", + "ydotaccent": "\u1E8F", + "ydotbelow": "\u1EF5", + "yeharabic": "\u064A", + "yehbarreearabic": "\u06D2", + "yehbarreefinalarabic": "\uFBAF", + "yehfinalarabic": "\uFEF2", + "yehhamzaabovearabic": "\u0626", + "yehhamzaabovefinalarabic": "\uFE8A", + "yehhamzaaboveinitialarabic": "\uFE8B", + "yehhamzaabovemedialarabic": "\uFE8C", + "yehinitialarabic": "\uFEF3", + "yehmedialarabic": "\uFEF4", + "yehmeeminitialarabic": "\uFCDD", + "yehmeemisolatedarabic": "\uFC58", + "yehnoonfinalarabic": "\uFC94", + "yehthreedotsbelowarabic": "\u06D1", + "yekorean": "\u3156", + "yen": "\u00A5", + "yenmonospace": "\uFFE5", + "yeokorean": "\u3155", + "yeorinhieuhkorean": "\u3186", + "yerahbenyomohebrew": "\u05AA", + "yerahbenyomolefthebrew": "\u05AA", + "yericyrillic": "\u044B", + "yerudieresiscyrillic": "\u04F9", + "yesieungkorean": "\u3181", + "yesieungpansioskorean": "\u3183", + "yesieungsioskorean": "\u3182", + "yetivhebrew": "\u059A", + "ygrave": "\u1EF3", + "yhook": "\u01B4", + "yhookabove": "\u1EF7", + "yiarmenian": "\u0575", + "yicyrillic": "\u0457", + "yikorean": "\u3162", + "yinyang": "\u262F", + "yiwnarmenian": 
"\u0582", + "ymonospace": "\uFF59", + "yod": "\u05D9", + "yoddagesh": "\uFB39", + "yoddageshhebrew": "\uFB39", + "yodhebrew": "\u05D9", + "yodyodhebrew": "\u05F2", + "yodyodpatahhebrew": "\uFB1F", + "yohiragana": "\u3088", + "yoikorean": "\u3189", + "yokatakana": "\u30E8", + "yokatakanahalfwidth": "\uFF96", + "yokorean": "\u315B", + "yosmallhiragana": "\u3087", + "yosmallkatakana": "\u30E7", + "yosmallkatakanahalfwidth": "\uFF6E", + "yotgreek": "\u03F3", + "yoyaekorean": "\u3188", + "yoyakorean": "\u3187", + "yoyakthai": "\u0E22", + "yoyingthai": "\u0E0D", + "yparen": "\u24B4", + "ypogegrammeni": "\u037A", + "ypogegrammenigreekcmb": "\u0345", + "yr": "\u01A6", + "yring": "\u1E99", + "ysuperior": "\u02B8", + "ytilde": "\u1EF9", + "yturned": "\u028E", + "yuhiragana": "\u3086", + "yuikorean": "\u318C", + "yukatakana": "\u30E6", + "yukatakanahalfwidth": "\uFF95", + "yukorean": "\u3160", + "yusbigcyrillic": "\u046B", + "yusbigiotifiedcyrillic": "\u046D", + "yuslittlecyrillic": "\u0467", + "yuslittleiotifiedcyrillic": "\u0469", + "yusmallhiragana": "\u3085", + "yusmallkatakana": "\u30E5", + "yusmallkatakanahalfwidth": "\uFF6D", + "yuyekorean": "\u318B", + "yuyeokorean": "\u318A", + "yyabengali": "\u09DF", + "yyadeva": "\u095F", + "z": "\u007A", + "zaarmenian": "\u0566", + "zacute": "\u017A", + "zadeva": "\u095B", + "zagurmukhi": "\u0A5B", + "zaharabic": "\u0638", + "zahfinalarabic": "\uFEC6", + "zahinitialarabic": "\uFEC7", + "zahiragana": "\u3056", + "zahmedialarabic": "\uFEC8", + "zainarabic": "\u0632", + "zainfinalarabic": "\uFEB0", + "zakatakana": "\u30B6", + "zaqefgadolhebrew": "\u0595", + "zaqefqatanhebrew": "\u0594", + "zarqahebrew": "\u0598", + "zayin": "\u05D6", + "zayindagesh": "\uFB36", + "zayindageshhebrew": "\uFB36", + "zayinhebrew": "\u05D6", + "zbopomofo": "\u3117", + "zcaron": "\u017E", + "zcircle": "\u24E9", + "zcircumflex": "\u1E91", + "zcurl": "\u0291", + "zdot": "\u017C", + "zdotaccent": "\u017C", + "zdotbelow": "\u1E93", + "zecyrillic": "\u0437", + "zedescendercyrillic": "\u0499", + "zedieresiscyrillic": "\u04DF", + "zehiragana": "\u305C", + "zekatakana": "\u30BC", + "zero": "\u0030", + "zeroarabic": "\u0660", + "zerobengali": "\u09E6", + "zerodeva": "\u0966", + "zerogujarati": "\u0AE6", + "zerogurmukhi": "\u0A66", + "zerohackarabic": "\u0660", + "zeroinferior": "\u2080", + "zeromonospace": "\uFF10", + "zerooldstyle": "\uF730", + "zeropersian": "\u06F0", + "zerosuperior": "\u2070", + "zerothai": "\u0E50", + "zerowidthjoiner": "\uFEFF", + "zerowidthnonjoiner": "\u200C", + "zerowidthspace": "\u200B", + "zeta": "\u03B6", + "zhbopomofo": "\u3113", + "zhearmenian": "\u056A", + "zhebrevecyrillic": "\u04C2", + "zhecyrillic": "\u0436", + "zhedescendercyrillic": "\u0497", + "zhedieresiscyrillic": "\u04DD", + "zihiragana": "\u3058", + "zikatakana": "\u30B8", + "zinorhebrew": "\u05AE", + "zlinebelow": "\u1E95", + "zmonospace": "\uFF5A", + "zohiragana": "\u305E", + "zokatakana": "\u30BE", + "zparen": "\u24B5", + "zretroflexhook": "\u0290", + "zstroke": "\u01B6", + "zuhiragana": "\u305A", + "zukatakana": "\u30BA", +} +# --end diff --git a/templates/skills/file_manager/dependencies/pdfminer/high_level.py b/templates/skills/file_manager/dependencies/pdfminer/high_level.py new file mode 100644 index 00000000..93ec2ed1 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/high_level.py @@ -0,0 +1,214 @@ +"""Functions that can be used for the most common use-cases for pdfminer.six""" + +import logging +import sys +from io import StringIO +from typing import Any, BinaryIO, 
Container, Iterator, Optional, cast
+
+from .pdfexceptions import PDFValueError
+from .converter import (
+    XMLConverter,
+    HTMLConverter,
+    TextConverter,
+    PDFPageAggregator,
+    HOCRConverter,
+)
+from .image import ImageWriter
+from .layout import LAParams, LTPage
+from .pdfdevice import PDFDevice, TagExtractor
+from .pdfinterp import PDFResourceManager, PDFPageInterpreter
+from .pdfpage import PDFPage
+from .utils import open_filename, FileOrName, AnyIO
+
+
+def extract_text_to_fp(
+    inf: BinaryIO,
+    outfp: AnyIO,
+    output_type: str = "text",
+    codec: str = "utf-8",
+    laparams: Optional[LAParams] = None,
+    maxpages: int = 0,
+    page_numbers: Optional[Container[int]] = None,
+    password: str = "",
+    scale: float = 1.0,
+    rotation: int = 0,
+    layoutmode: str = "normal",
+    output_dir: Optional[str] = None,
+    strip_control: bool = False,
+    debug: bool = False,
+    disable_caching: bool = False,
+    **kwargs: Any,
+) -> None:
+    """Parses text from inf-file and writes to outfp file-like object.
+
+    Takes loads of optional arguments but the defaults are somewhat sane.
+    Beware laparams: Including an empty LAParams is not the same as passing
+    None!
+
+    :param inf: a file-like object to read PDF structure from, such as a
+        file handler (using the builtin `open()` function) or a `BytesIO`.
+    :param outfp: a file-like object to write the text to.
+    :param output_type: May be 'text', 'xml', 'html', 'hocr', 'tag'.
+        Only 'text' works properly.
+    :param codec: Text decoding codec
+    :param laparams: An LAParams object from pdfminer.layout. Default is None
+        but may not layout correctly.
+    :param maxpages: How many pages to stop parsing after
+    :param page_numbers: zero-indexed page numbers to operate on.
+    :param password: For encrypted PDFs, the password to decrypt.
+    :param scale: Scale factor
+    :param rotation: Rotation factor
+    :param layoutmode: Default is 'normal', see
+        pdfminer.converter.HTMLConverter
+    :param output_dir: If given, creates an ImageWriter for extracted images.
+    :param strip_control: Does what it says on the tin
+    :param debug: Output more logging data
+    :param disable_caching: Does what it says on the tin
+    :param other:
+    :return: nothing, acting as it does on two streams. Use StringIO to get
+        strings.
+    """
+    if debug:
+        logging.getLogger().setLevel(logging.DEBUG)
+
+    imagewriter = None
+    if output_dir:
+        imagewriter = ImageWriter(output_dir)
+
+    rsrcmgr = PDFResourceManager(caching=not disable_caching)
+    device: Optional[PDFDevice] = None
+
+    if output_type != "text" and outfp == sys.stdout:
+        outfp = sys.stdout.buffer
+
+    if output_type == "text":
+        device = TextConverter(
+            rsrcmgr, outfp, codec=codec, laparams=laparams, imagewriter=imagewriter
+        )
+
+    elif output_type == "xml":
+        device = XMLConverter(
+            rsrcmgr,
+            outfp,
+            codec=codec,
+            laparams=laparams,
+            imagewriter=imagewriter,
+            stripcontrol=strip_control,
+        )
+
+    elif output_type == "html":
+        device = HTMLConverter(
+            rsrcmgr,
+            outfp,
+            codec=codec,
+            scale=scale,
+            layoutmode=layoutmode,
+            laparams=laparams,
+            imagewriter=imagewriter,
+        )
+
+    elif output_type == "hocr":
+        device = HOCRConverter(
+            rsrcmgr, outfp, codec=codec, laparams=laparams, stripcontrol=strip_control
+        )
+
+    elif output_type == "tag":
+        # Binary I/O is required, but we have no good way to test it here.
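+        # TagExtractor writes the extracted tag stream as bytes, so outfp must
+        # be binary at this point: a stdout outfp was already replaced with
+        # sys.stdout.buffer above, and the cast records that for type checkers.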
+ device = TagExtractor(rsrcmgr, cast(BinaryIO, outfp), codec=codec) + + else: + msg = f"Output type can be text, html, xml or tag but is " f"{output_type}" + raise PDFValueError(msg) + + assert device is not None + interpreter = PDFPageInterpreter(rsrcmgr, device) + for page in PDFPage.get_pages( + inf, + page_numbers, + maxpages=maxpages, + password=password, + caching=not disable_caching, + ): + page.rotate = (page.rotate + rotation) % 360 + interpreter.process_page(page) + + device.close() + + +def extract_text( + pdf_file: FileOrName, + password: str = "", + page_numbers: Optional[Container[int]] = None, + maxpages: int = 0, + caching: bool = True, + codec: str = "utf-8", + laparams: Optional[LAParams] = None, +) -> str: + """Parse and return the text contained in a PDF file. + + :param pdf_file: Either a file path or a file-like object for the PDF file + to be worked on. + :param password: For encrypted PDFs, the password to decrypt. + :param page_numbers: List of zero-indexed page numbers to extract. + :param maxpages: The maximum number of pages to parse + :param caching: If resources should be cached + :param codec: Text decoding codec + :param laparams: An LAParams object from pdfminer.layout. If None, uses + some default settings that often work well. + :return: a string containing all of the text extracted. + """ + if laparams is None: + laparams = LAParams() + + with open_filename(pdf_file, "rb") as fp, StringIO() as output_string: + fp = cast(BinaryIO, fp) # we opened in binary mode + rsrcmgr = PDFResourceManager(caching=caching) + device = TextConverter(rsrcmgr, output_string, codec=codec, laparams=laparams) + interpreter = PDFPageInterpreter(rsrcmgr, device) + + for page in PDFPage.get_pages( + fp, + page_numbers, + maxpages=maxpages, + password=password, + caching=caching, + ): + interpreter.process_page(page) + + return output_string.getvalue() + + +def extract_pages( + pdf_file: FileOrName, + password: str = "", + page_numbers: Optional[Container[int]] = None, + maxpages: int = 0, + caching: bool = True, + laparams: Optional[LAParams] = None, +) -> Iterator[LTPage]: + """Extract and yield LTPage objects + + :param pdf_file: Either a file path or a file-like object for the PDF file + to be worked on. + :param password: For encrypted PDFs, the password to decrypt. + :param page_numbers: List of zero-indexed page numbers to extract. + :param maxpages: The maximum number of pages to parse + :param caching: If resources should be cached + :param laparams: An LAParams object from pdfminer.layout. If None, uses + some default settings that often work well. 
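+
+    Example (illustrative sketch added to this vendored copy; "sample.pdf"
+    is a placeholder path)::
+
+        from pdfminer.high_level import extract_pages
+        from pdfminer.layout import LTTextContainer
+
+        for page_layout in extract_pages("sample.pdf"):
+            for element in page_layout:
+                if isinstance(element, LTTextContainer):
+                    print(element.get_text())
+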
+ :return: LTPage objects + """ + if laparams is None: + laparams = LAParams() + + with open_filename(pdf_file, "rb") as fp: + fp = cast(BinaryIO, fp) # we opened in binary mode + resource_manager = PDFResourceManager(caching=caching) + device = PDFPageAggregator(resource_manager, laparams=laparams) + interpreter = PDFPageInterpreter(resource_manager, device) + for page in PDFPage.get_pages( + fp, page_numbers, maxpages=maxpages, password=password, caching=caching + ): + interpreter.process_page(page) + layout = device.get_result() + yield layout diff --git a/templates/skills/file_manager/dependencies/pdfminer/image.py b/templates/skills/file_manager/dependencies/pdfminer/image.py new file mode 100644 index 00000000..52aec4bb --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/image.py @@ -0,0 +1,274 @@ +import os +import os.path +import struct +from io import BytesIO +from typing import BinaryIO, Tuple + +try: + from typing import Literal +except ImportError: + # Literal was introduced in Python 3.8 + from typing_extensions import Literal # type: ignore[assignment] + +from .jbig2 import JBIG2StreamReader, JBIG2StreamWriter +from .layout import LTImage +from .pdfcolor import LITERAL_DEVICE_CMYK +from .pdfcolor import LITERAL_DEVICE_GRAY +from .pdfcolor import LITERAL_DEVICE_RGB +from .pdftypes import ( + LITERALS_DCT_DECODE, + LITERALS_JBIG2_DECODE, + LITERALS_JPX_DECODE, + LITERALS_FLATE_DECODE, +) +from .pdfexceptions import PDFValueError + +PIL_ERROR_MESSAGE = ( + "Could not import Pillow. This dependency of pdfminer.six is not " + "installed by default. You need it to to save jpg images to a file. Install it " + "with `pip install 'pdfminer.six[image]'`" +) + + +def align32(x: int) -> int: + return ((x + 3) // 4) * 4 + + +class BMPWriter: + def __init__(self, fp: BinaryIO, bits: int, width: int, height: int) -> None: + self.fp = fp + self.bits = bits + self.width = width + self.height = height + if bits == 1: + ncols = 2 + elif bits == 8: + ncols = 256 + elif bits == 24: + ncols = 0 + else: + raise PDFValueError(bits) + self.linesize = align32((self.width * self.bits + 7) // 8) + self.datasize = self.linesize * self.height + headersize = 14 + 40 + ncols * 4 + info = struct.pack( + " None: + self.fp.seek(self.pos1 - (y + 1) * self.linesize) + self.fp.write(data) + + +class ImageWriter: + """Write image to a file + + Supports various image types: JPEG, JBIG2 and bitmaps + """ + + def __init__(self, outdir: str) -> None: + self.outdir = outdir + if not os.path.exists(self.outdir): + os.makedirs(self.outdir) + + def export_image(self, image: LTImage) -> str: + """Save an LTImage to disk""" + (width, height) = image.srcsize + + filters = image.stream.get_filters() + + if filters[-1][0] in LITERALS_DCT_DECODE: + name = self._save_jpeg(image) + + elif filters[-1][0] in LITERALS_JPX_DECODE: + name = self._save_jpeg2000(image) + + elif self._is_jbig2_iamge(image): + name = self._save_jbig2(image) + + elif image.bits == 1: + name = self._save_bmp(image, width, height, (width + 7) // 8, image.bits) + + elif image.bits == 8 and LITERAL_DEVICE_RGB in image.colorspace: + name = self._save_bmp(image, width, height, width * 3, image.bits * 3) + + elif image.bits == 8 and LITERAL_DEVICE_GRAY in image.colorspace: + name = self._save_bmp(image, width, height, width, image.bits) + + elif len(filters) == 1 and filters[0][0] in LITERALS_FLATE_DECODE: + name = self._save_bytes(image) + + else: + name = self._save_raw(image) + + return name + + def _save_jpeg(self, image: LTImage) -> str: 
+ """Save a JPEG encoded image""" + data = image.stream.get_data() + + name, path = self._create_unique_image_name(image, ".jpg") + with open(path, "wb") as fp: + if LITERAL_DEVICE_CMYK in image.colorspace: + try: + from PIL import Image, ImageChops # type: ignore[import] + except ImportError: + raise ImportError(PIL_ERROR_MESSAGE) + + ifp = BytesIO(data) + i = Image.open(ifp) + i = ImageChops.invert(i) + i = i.convert("RGB") + i.save(fp, "JPEG") + else: + fp.write(data) + + return name + + def _save_jpeg2000(self, image: LTImage) -> str: + """Save a JPEG 2000 encoded image""" + data = image.stream.get_data() + + name, path = self._create_unique_image_name(image, ".jp2") + with open(path, "wb") as fp: + try: + from PIL import Image # type: ignore[import] + except ImportError: + raise ImportError(PIL_ERROR_MESSAGE) + + # if we just write the raw data, most image programs + # that I have tried cannot open the file. However, + # open and saving with PIL produces a file that + # seems to be easily opened by other programs + ifp = BytesIO(data) + i = Image.open(ifp) + i.save(fp, "JPEG2000") + return name + + def _save_jbig2(self, image: LTImage) -> str: + """Save a JBIG2 encoded image""" + name, path = self._create_unique_image_name(image, ".jb2") + with open(path, "wb") as fp: + input_stream = BytesIO() + + global_streams = [] + filters = image.stream.get_filters() + for filter_name, params in filters: + if filter_name in LITERALS_JBIG2_DECODE: + global_streams.append(params["JBIG2Globals"].resolve()) + + if len(global_streams) > 1: + msg = ( + "There should never be more than one JBIG2Globals " + "associated with a JBIG2 embedded image" + ) + raise PDFValueError(msg) + if len(global_streams) == 1: + input_stream.write(global_streams[0].get_data().rstrip(b"\n")) + input_stream.write(image.stream.get_data()) + input_stream.seek(0) + reader = JBIG2StreamReader(input_stream) + segments = reader.get_segments() + + writer = JBIG2StreamWriter(fp) + writer.write_file(segments) + return name + + def _save_bmp( + self, image: LTImage, width: int, height: int, bytes_per_line: int, bits: int + ) -> str: + """Save a BMP encoded image""" + name, path = self._create_unique_image_name(image, ".bmp") + with open(path, "wb") as fp: + bmp = BMPWriter(fp, bits, width, height) + data = image.stream.get_data() + i = 0 + for y in range(height): + bmp.write_line(y, data[i : i + bytes_per_line]) + i += bytes_per_line + return name + + def _save_bytes(self, image: LTImage) -> str: + """Save an image without encoding, just bytes""" + name, path = self._create_unique_image_name(image, ".jpg") + width, height = image.srcsize + channels = len(image.stream.get_data()) / width / height / (image.bits / 8) + with open(path, "wb") as fp: + try: + from PIL import Image # type: ignore[import] + from PIL import ImageOps + except ImportError: + raise ImportError(PIL_ERROR_MESSAGE) + + mode: Literal["1", "L", "RGB", "CMYK"] + if image.bits == 1: + mode = "1" + elif image.bits == 8 and channels == 1: + mode = "L" + elif image.bits == 8 and channels == 3: + mode = "RGB" + elif image.bits == 8 and channels == 4: + mode = "CMYK" + + img = Image.frombytes(mode, image.srcsize, image.stream.get_data(), "raw") + if mode == "L": + img = ImageOps.invert(img) + + img.save(fp) + + return name + + def _save_raw(self, image: LTImage) -> str: + """Save an image with unknown encoding""" + ext = ".%d.%dx%d.img" % (image.bits, image.srcsize[0], image.srcsize[1]) + name, path = self._create_unique_image_name(image, ext) + + with open(path, "wb") as 
fp: + fp.write(image.stream.get_data()) + return name + + @staticmethod + def _is_jbig2_iamge(image: LTImage) -> bool: + filters = image.stream.get_filters() + for filter_name, params in filters: + if filter_name in LITERALS_JBIG2_DECODE: + return True + return False + + def _create_unique_image_name(self, image: LTImage, ext: str) -> Tuple[str, str]: + name = image.name + ext + path = os.path.join(self.outdir, name) + img_index = 0 + while os.path.exists(path): + name = "%s.%d%s" % (image.name, img_index, ext) + path = os.path.join(self.outdir, name) + img_index += 1 + return name, path diff --git a/templates/skills/file_manager/dependencies/pdfminer/jbig2.py b/templates/skills/file_manager/dependencies/pdfminer/jbig2.py new file mode 100644 index 00000000..f1e4f7ac --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/jbig2.py @@ -0,0 +1,358 @@ +import math +import os +from struct import pack, unpack, calcsize +from typing import BinaryIO, Dict, Iterable, List, Optional, Tuple, Union, cast + +from .pdfexceptions import PDFValueError + +# segment structure base +SEG_STRUCT = [ + (">L", "number"), + (">B", "flags"), + (">B", "retention_flags"), + (">B", "page_assoc"), + (">L", "data_length"), +] + +# segment header literals +HEADER_FLAG_DEFERRED = 0b10000000 +HEADER_FLAG_PAGE_ASSOC_LONG = 0b01000000 + +SEG_TYPE_MASK = 0b00111111 + +REF_COUNT_SHORT_MASK = 0b11100000 +REF_COUNT_LONG_MASK = 0x1FFFFFFF +REF_COUNT_LONG = 7 + +DATA_LEN_UNKNOWN = 0xFFFFFFFF + +# segment types +SEG_TYPE_IMMEDIATE_GEN_REGION = 38 +SEG_TYPE_END_OF_PAGE = 49 +SEG_TYPE_END_OF_FILE = 51 + +# file literals +FILE_HEADER_ID = b"\x97\x4A\x42\x32\x0D\x0A\x1A\x0A" +FILE_HEAD_FLAG_SEQUENTIAL = 0b00000001 + + +def bit_set(bit_pos: int, value: int) -> bool: + return bool((value >> bit_pos) & 1) + + +def check_flag(flag: int, value: int) -> bool: + return bool(flag & value) + + +def masked_value(mask: int, value: int) -> int: + for bit_pos in range(0, 31): + if bit_set(bit_pos, mask): + return (value & mask) >> bit_pos + + raise PDFValueError("Invalid mask or value") + + +def mask_value(mask: int, value: int) -> int: + for bit_pos in range(0, 31): + if bit_set(bit_pos, mask): + return (value & (mask >> bit_pos)) << bit_pos + + raise PDFValueError("Invalid mask or value") + + +def unpack_int(format: str, buffer: bytes) -> int: + assert format in {">B", ">I", ">L"} + [result] = cast(Tuple[int], unpack(format, buffer)) + return result + + +JBIG2SegmentFlags = Dict[str, Union[int, bool]] +JBIG2RetentionFlags = Dict[str, Union[int, List[int], List[bool]]] +JBIG2Segment = Dict[ + str, Union[bool, int, bytes, JBIG2SegmentFlags, JBIG2RetentionFlags] +] + + +class JBIG2StreamReader: + """Read segments from a JBIG2 byte stream""" + + def __init__(self, stream: BinaryIO) -> None: + self.stream = stream + + def get_segments(self) -> List[JBIG2Segment]: + segments: List[JBIG2Segment] = [] + while not self.is_eof(): + segment: JBIG2Segment = {} + for field_format, name in SEG_STRUCT: + field_len = calcsize(field_format) + field = self.stream.read(field_len) + if len(field) < field_len: + segment["_error"] = True + break + value = unpack_int(field_format, field) + parser = getattr(self, "parse_%s" % name, None) + if callable(parser): + value = parser(segment, value, field) + segment[name] = value + + if not segment.get("_error"): + segments.append(segment) + return segments + + def is_eof(self) -> bool: + if self.stream.read(1) == b"": + return True + else: + self.stream.seek(-1, os.SEEK_CUR) + return False + + def 
parse_flags( + self, segment: JBIG2Segment, flags: int, field: bytes + ) -> JBIG2SegmentFlags: + return { + "deferred": check_flag(HEADER_FLAG_DEFERRED, flags), + "page_assoc_long": check_flag(HEADER_FLAG_PAGE_ASSOC_LONG, flags), + "type": masked_value(SEG_TYPE_MASK, flags), + } + + def parse_retention_flags( + self, segment: JBIG2Segment, flags: int, field: bytes + ) -> JBIG2RetentionFlags: + ref_count = masked_value(REF_COUNT_SHORT_MASK, flags) + retain_segments = [] + ref_segments = [] + + if ref_count < REF_COUNT_LONG: + for bit_pos in range(5): + retain_segments.append(bit_set(bit_pos, flags)) + else: + field += self.stream.read(3) + ref_count = unpack_int(">L", field) + ref_count = masked_value(REF_COUNT_LONG_MASK, ref_count) + ret_bytes_count = int(math.ceil((ref_count + 1) / 8)) + for ret_byte_index in range(ret_bytes_count): + ret_byte = unpack_int(">B", self.stream.read(1)) + for bit_pos in range(7): + retain_segments.append(bit_set(bit_pos, ret_byte)) + + seg_num = segment["number"] + assert isinstance(seg_num, int) + if seg_num <= 256: + ref_format = ">B" + elif seg_num <= 65536: + ref_format = ">I" + else: + ref_format = ">L" + + ref_size = calcsize(ref_format) + + for ref_index in range(ref_count): + ref_data = self.stream.read(ref_size) + ref = unpack_int(ref_format, ref_data) + ref_segments.append(ref) + + return { + "ref_count": ref_count, + "retain_segments": retain_segments, + "ref_segments": ref_segments, + } + + def parse_page_assoc(self, segment: JBIG2Segment, page: int, field: bytes) -> int: + if cast(JBIG2SegmentFlags, segment["flags"])["page_assoc_long"]: + field += self.stream.read(3) + page = unpack_int(">L", field) + return page + + def parse_data_length( + self, segment: JBIG2Segment, length: int, field: bytes + ) -> int: + if length: + if ( + cast(JBIG2SegmentFlags, segment["flags"])["type"] + == SEG_TYPE_IMMEDIATE_GEN_REGION + ) and (length == DATA_LEN_UNKNOWN): + + raise NotImplementedError( + "Working with unknown segment length " "is not implemented yet" + ) + else: + segment["raw_data"] = self.stream.read(length) + + return length + + +class JBIG2StreamWriter: + """Write JBIG2 segments to a file in JBIG2 format""" + + EMPTY_RETENTION_FLAGS: JBIG2RetentionFlags = { + "ref_count": 0, + "ref_segments": cast(List[int], []), + "retain_segments": cast(List[bool], []), + } + + def __init__(self, stream: BinaryIO) -> None: + self.stream = stream + + def write_segments( + self, segments: Iterable[JBIG2Segment], fix_last_page: bool = True + ) -> int: + data_len = 0 + current_page: Optional[int] = None + seg_num: Optional[int] = None + + for segment in segments: + data = self.encode_segment(segment) + self.stream.write(data) + data_len += len(data) + + seg_num = cast(Optional[int], segment["number"]) + + if fix_last_page: + seg_page = cast(int, segment.get("page_assoc")) + + if ( + cast(JBIG2SegmentFlags, segment["flags"])["type"] + == SEG_TYPE_END_OF_PAGE + ): + current_page = None + elif seg_page: + current_page = seg_page + + if fix_last_page and current_page and (seg_num is not None): + segment = self.get_eop_segment(seg_num + 1, current_page) + data = self.encode_segment(segment) + self.stream.write(data) + data_len += len(data) + + return data_len + + def write_file( + self, segments: Iterable[JBIG2Segment], fix_last_page: bool = True + ) -> int: + header = FILE_HEADER_ID + header_flags = FILE_HEAD_FLAG_SEQUENTIAL + header += pack(">B", header_flags) + # The embedded JBIG2 files in a PDF always + # only have one page + number_of_pages = pack(">L", 1) + header 
+= number_of_pages + self.stream.write(header) + data_len = len(header) + + data_len += self.write_segments(segments, fix_last_page) + + seg_num = 0 + for segment in segments: + seg_num = cast(int, segment["number"]) + + if fix_last_page: + seg_num_offset = 2 + else: + seg_num_offset = 1 + eof_segment = self.get_eof_segment(seg_num + seg_num_offset) + data = self.encode_segment(eof_segment) + + self.stream.write(data) + data_len += len(data) + + return data_len + + def encode_segment(self, segment: JBIG2Segment) -> bytes: + data = b"" + for field_format, name in SEG_STRUCT: + value = segment.get(name) + encoder = getattr(self, "encode_%s" % name, None) + if callable(encoder): + field = encoder(value, segment) + else: + field = pack(field_format, value) + data += field + return data + + def encode_flags(self, value: JBIG2SegmentFlags, segment: JBIG2Segment) -> bytes: + flags = 0 + if value.get("deferred"): + flags |= HEADER_FLAG_DEFERRED + + if "page_assoc_long" in value: + flags |= HEADER_FLAG_PAGE_ASSOC_LONG if value["page_assoc_long"] else flags + else: + flags |= ( + HEADER_FLAG_PAGE_ASSOC_LONG + if cast(int, segment.get("page", 0)) > 255 + else flags + ) + + flags |= mask_value(SEG_TYPE_MASK, value["type"]) + + return pack(">B", flags) + + def encode_retention_flags( + self, value: JBIG2RetentionFlags, segment: JBIG2Segment + ) -> bytes: + flags = [] + flags_format = ">B" + ref_count = value["ref_count"] + assert isinstance(ref_count, int) + retain_segments = cast(List[bool], value.get("retain_segments", [])) + + if ref_count <= 4: + flags_byte = mask_value(REF_COUNT_SHORT_MASK, ref_count) + for ref_index, ref_retain in enumerate(retain_segments): + if ref_retain: + flags_byte |= 1 << ref_index + flags.append(flags_byte) + else: + bytes_count = math.ceil((ref_count + 1) / 8) + flags_format = ">L" + ("B" * bytes_count) + flags_dword = mask_value(REF_COUNT_SHORT_MASK, REF_COUNT_LONG) << 24 + flags.append(flags_dword) + + for byte_index in range(bytes_count): + ret_byte = 0 + ret_part = retain_segments[byte_index * 8 : byte_index * 8 + 8] + for bit_pos, ret_seg in enumerate(ret_part): + ret_byte |= 1 << bit_pos if ret_seg else ret_byte + + flags.append(ret_byte) + + ref_segments = cast(List[int], value.get("ref_segments", [])) + + seg_num = cast(int, segment["number"]) + if seg_num <= 256: + ref_format = "B" + elif seg_num <= 65536: + ref_format = "I" + else: + ref_format = "L" + + for ref in ref_segments: + flags_format += ref_format + flags.append(ref) + + return pack(flags_format, *flags) + + def encode_data_length(self, value: int, segment: JBIG2Segment) -> bytes: + data = pack(">L", value) + data += cast(bytes, segment["raw_data"]) + return data + + def get_eop_segment(self, seg_number: int, page_number: int) -> JBIG2Segment: + return { + "data_length": 0, + "flags": {"deferred": False, "type": SEG_TYPE_END_OF_PAGE}, + "number": seg_number, + "page_assoc": page_number, + "raw_data": b"", + "retention_flags": JBIG2StreamWriter.EMPTY_RETENTION_FLAGS, + } + + def get_eof_segment(self, seg_number: int) -> JBIG2Segment: + return { + "data_length": 0, + "flags": {"deferred": False, "type": SEG_TYPE_END_OF_FILE}, + "number": seg_number, + "page_assoc": 0, + "raw_data": b"", + "retention_flags": JBIG2StreamWriter.EMPTY_RETENTION_FLAGS, + } diff --git a/templates/skills/file_manager/dependencies/pdfminer/latin_enc.py b/templates/skills/file_manager/dependencies/pdfminer/latin_enc.py new file mode 100644 index 00000000..6238745e --- /dev/null +++ 
b/templates/skills/file_manager/dependencies/pdfminer/latin_enc.py @@ -0,0 +1,246 @@ +""" Standard encoding tables used in PDF. + +This table is extracted from PDF Reference Manual 1.6, pp.925 + "D.1 Latin Character Set and Encodings" + +""" + +from typing import List, Optional, Tuple + +EncodingRow = Tuple[str, Optional[int], Optional[int], Optional[int], Optional[int]] + +ENCODING: List[EncodingRow] = [ + # (name, std, mac, win, pdf) + ("A", 65, 65, 65, 65), + ("AE", 225, 174, 198, 198), + ("Aacute", None, 231, 193, 193), + ("Acircumflex", None, 229, 194, 194), + ("Adieresis", None, 128, 196, 196), + ("Agrave", None, 203, 192, 192), + ("Aring", None, 129, 197, 197), + ("Atilde", None, 204, 195, 195), + ("B", 66, 66, 66, 66), + ("C", 67, 67, 67, 67), + ("Ccedilla", None, 130, 199, 199), + ("D", 68, 68, 68, 68), + ("E", 69, 69, 69, 69), + ("Eacute", None, 131, 201, 201), + ("Ecircumflex", None, 230, 202, 202), + ("Edieresis", None, 232, 203, 203), + ("Egrave", None, 233, 200, 200), + ("Eth", None, None, 208, 208), + ("Euro", None, None, 128, 160), + ("F", 70, 70, 70, 70), + ("G", 71, 71, 71, 71), + ("H", 72, 72, 72, 72), + ("I", 73, 73, 73, 73), + ("Iacute", None, 234, 205, 205), + ("Icircumflex", None, 235, 206, 206), + ("Idieresis", None, 236, 207, 207), + ("Igrave", None, 237, 204, 204), + ("J", 74, 74, 74, 74), + ("K", 75, 75, 75, 75), + ("L", 76, 76, 76, 76), + ("Lslash", 232, None, None, 149), + ("M", 77, 77, 77, 77), + ("N", 78, 78, 78, 78), + ("Ntilde", None, 132, 209, 209), + ("O", 79, 79, 79, 79), + ("OE", 234, 206, 140, 150), + ("Oacute", None, 238, 211, 211), + ("Ocircumflex", None, 239, 212, 212), + ("Odieresis", None, 133, 214, 214), + ("Ograve", None, 241, 210, 210), + ("Oslash", 233, 175, 216, 216), + ("Otilde", None, 205, 213, 213), + ("P", 80, 80, 80, 80), + ("Q", 81, 81, 81, 81), + ("R", 82, 82, 82, 82), + ("S", 83, 83, 83, 83), + ("Scaron", None, None, 138, 151), + ("T", 84, 84, 84, 84), + ("Thorn", None, None, 222, 222), + ("U", 85, 85, 85, 85), + ("Uacute", None, 242, 218, 218), + ("Ucircumflex", None, 243, 219, 219), + ("Udieresis", None, 134, 220, 220), + ("Ugrave", None, 244, 217, 217), + ("V", 86, 86, 86, 86), + ("W", 87, 87, 87, 87), + ("X", 88, 88, 88, 88), + ("Y", 89, 89, 89, 89), + ("Yacute", None, None, 221, 221), + ("Ydieresis", None, 217, 159, 152), + ("Z", 90, 90, 90, 90), + ("Zcaron", None, None, 142, 153), + ("a", 97, 97, 97, 97), + ("aacute", None, 135, 225, 225), + ("acircumflex", None, 137, 226, 226), + ("acute", 194, 171, 180, 180), + ("adieresis", None, 138, 228, 228), + ("ae", 241, 190, 230, 230), + ("agrave", None, 136, 224, 224), + ("ampersand", 38, 38, 38, 38), + ("aring", None, 140, 229, 229), + ("asciicircum", 94, 94, 94, 94), + ("asciitilde", 126, 126, 126, 126), + ("asterisk", 42, 42, 42, 42), + ("at", 64, 64, 64, 64), + ("atilde", None, 139, 227, 227), + ("b", 98, 98, 98, 98), + ("backslash", 92, 92, 92, 92), + ("bar", 124, 124, 124, 124), + ("braceleft", 123, 123, 123, 123), + ("braceright", 125, 125, 125, 125), + ("bracketleft", 91, 91, 91, 91), + ("bracketright", 93, 93, 93, 93), + ("breve", 198, 249, None, 24), + ("brokenbar", None, None, 166, 166), + ("bullet", 183, 165, 149, 128), + ("c", 99, 99, 99, 99), + ("caron", 207, 255, None, 25), + ("ccedilla", None, 141, 231, 231), + ("cedilla", 203, 252, 184, 184), + ("cent", 162, 162, 162, 162), + ("circumflex", 195, 246, 136, 26), + ("colon", 58, 58, 58, 58), + ("comma", 44, 44, 44, 44), + ("copyright", None, 169, 169, 169), + ("currency", 168, 219, 164, 164), + ("d", 100, 100, 100, 100), 
+ ("dagger", 178, 160, 134, 129), + ("daggerdbl", 179, 224, 135, 130), + ("degree", None, 161, 176, 176), + ("dieresis", 200, 172, 168, 168), + ("divide", None, 214, 247, 247), + ("dollar", 36, 36, 36, 36), + ("dotaccent", 199, 250, None, 27), + ("dotlessi", 245, 245, None, 154), + ("e", 101, 101, 101, 101), + ("eacute", None, 142, 233, 233), + ("ecircumflex", None, 144, 234, 234), + ("edieresis", None, 145, 235, 235), + ("egrave", None, 143, 232, 232), + ("eight", 56, 56, 56, 56), + ("ellipsis", 188, 201, 133, 131), + ("emdash", 208, 209, 151, 132), + ("endash", 177, 208, 150, 133), + ("equal", 61, 61, 61, 61), + ("eth", None, None, 240, 240), + ("exclam", 33, 33, 33, 33), + ("exclamdown", 161, 193, 161, 161), + ("f", 102, 102, 102, 102), + ("fi", 174, 222, None, 147), + ("five", 53, 53, 53, 53), + ("fl", 175, 223, None, 148), + ("florin", 166, 196, 131, 134), + ("four", 52, 52, 52, 52), + ("fraction", 164, 218, None, 135), + ("g", 103, 103, 103, 103), + ("germandbls", 251, 167, 223, 223), + ("grave", 193, 96, 96, 96), + ("greater", 62, 62, 62, 62), + ("guillemotleft", 171, 199, 171, 171), + ("guillemotright", 187, 200, 187, 187), + ("guilsinglleft", 172, 220, 139, 136), + ("guilsinglright", 173, 221, 155, 137), + ("h", 104, 104, 104, 104), + ("hungarumlaut", 205, 253, None, 28), + ("hyphen", 45, 45, 45, 45), + ("i", 105, 105, 105, 105), + ("iacute", None, 146, 237, 237), + ("icircumflex", None, 148, 238, 238), + ("idieresis", None, 149, 239, 239), + ("igrave", None, 147, 236, 236), + ("j", 106, 106, 106, 106), + ("k", 107, 107, 107, 107), + ("l", 108, 108, 108, 108), + ("less", 60, 60, 60, 60), + ("logicalnot", None, 194, 172, 172), + ("lslash", 248, None, None, 155), + ("m", 109, 109, 109, 109), + ("macron", 197, 248, 175, 175), + ("minus", None, None, None, 138), + ("mu", None, 181, 181, 181), + ("multiply", None, None, 215, 215), + ("n", 110, 110, 110, 110), + ("nbspace", None, 202, 160, None), + ("nine", 57, 57, 57, 57), + ("ntilde", None, 150, 241, 241), + ("numbersign", 35, 35, 35, 35), + ("o", 111, 111, 111, 111), + ("oacute", None, 151, 243, 243), + ("ocircumflex", None, 153, 244, 244), + ("odieresis", None, 154, 246, 246), + ("oe", 250, 207, 156, 156), + ("ogonek", 206, 254, None, 29), + ("ograve", None, 152, 242, 242), + ("one", 49, 49, 49, 49), + ("onehalf", None, None, 189, 189), + ("onequarter", None, None, 188, 188), + ("onesuperior", None, None, 185, 185), + ("ordfeminine", 227, 187, 170, 170), + ("ordmasculine", 235, 188, 186, 186), + ("oslash", 249, 191, 248, 248), + ("otilde", None, 155, 245, 245), + ("p", 112, 112, 112, 112), + ("paragraph", 182, 166, 182, 182), + ("parenleft", 40, 40, 40, 40), + ("parenright", 41, 41, 41, 41), + ("percent", 37, 37, 37, 37), + ("period", 46, 46, 46, 46), + ("periodcentered", 180, 225, 183, 183), + ("perthousand", 189, 228, 137, 139), + ("plus", 43, 43, 43, 43), + ("plusminus", None, 177, 177, 177), + ("q", 113, 113, 113, 113), + ("question", 63, 63, 63, 63), + ("questiondown", 191, 192, 191, 191), + ("quotedbl", 34, 34, 34, 34), + ("quotedblbase", 185, 227, 132, 140), + ("quotedblleft", 170, 210, 147, 141), + ("quotedblright", 186, 211, 148, 142), + ("quoteleft", 96, 212, 145, 143), + ("quoteright", 39, 213, 146, 144), + ("quotesinglbase", 184, 226, 130, 145), + ("quotesingle", 169, 39, 39, 39), + ("r", 114, 114, 114, 114), + ("registered", None, 168, 174, 174), + ("ring", 202, 251, None, 30), + ("s", 115, 115, 115, 115), + ("scaron", None, None, 154, 157), + ("section", 167, 164, 167, 167), + ("semicolon", 59, 59, 59, 59), + ("seven", 
55, 55, 55, 55), + ("six", 54, 54, 54, 54), + ("slash", 47, 47, 47, 47), + ("space", 32, 32, 32, 32), + ("space", None, 202, 160, None), + ("space", None, 202, 173, None), + ("sterling", 163, 163, 163, 163), + ("t", 116, 116, 116, 116), + ("thorn", None, None, 254, 254), + ("three", 51, 51, 51, 51), + ("threequarters", None, None, 190, 190), + ("threesuperior", None, None, 179, 179), + ("tilde", 196, 247, 152, 31), + ("trademark", None, 170, 153, 146), + ("two", 50, 50, 50, 50), + ("twosuperior", None, None, 178, 178), + ("u", 117, 117, 117, 117), + ("uacute", None, 156, 250, 250), + ("ucircumflex", None, 158, 251, 251), + ("udieresis", None, 159, 252, 252), + ("ugrave", None, 157, 249, 249), + ("underscore", 95, 95, 95, 95), + ("v", 118, 118, 118, 118), + ("w", 119, 119, 119, 119), + ("x", 120, 120, 120, 120), + ("y", 121, 121, 121, 121), + ("yacute", None, None, 253, 253), + ("ydieresis", None, 216, 255, 255), + ("yen", 165, 180, 165, 165), + ("z", 122, 122, 122, 122), + ("zcaron", None, None, 158, 158), + ("zero", 48, 48, 48, 48), +] diff --git a/templates/skills/file_manager/dependencies/pdfminer/layout.py b/templates/skills/file_manager/dependencies/pdfminer/layout.py new file mode 100644 index 00000000..e706f6e1 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/layout.py @@ -0,0 +1,1044 @@ +import heapq +import logging +from typing import ( + Dict, + Generic, + Iterable, + Iterator, + List, + Optional, + Sequence, + Set, + Tuple, + TypeVar, + Union, + cast, +) + +from .pdfcolor import PDFColorSpace +from .pdffont import PDFFont +from .pdfinterp import Color +from .pdfinterp import PDFGraphicState +from .pdftypes import PDFStream +from .pdfexceptions import PDFTypeError, PDFValueError +from .utils import INF, PathSegment +from .utils import LTComponentT +from .utils import Matrix +from .utils import Plane +from .utils import Point +from .utils import Rect +from .utils import apply_matrix_pt +from .utils import bbox2str +from .utils import fsplit +from .utils import get_bound +from .utils import matrix2str +from .utils import uniq + +logger = logging.getLogger(__name__) + + +class IndexAssigner: + def __init__(self, index: int = 0) -> None: + self.index = index + + def run(self, obj: "LTItem") -> None: + if isinstance(obj, LTTextBox): + obj.index = self.index + self.index += 1 + elif isinstance(obj, LTTextGroup): + for x in obj: + self.run(x) + + +class LAParams: + """Parameters for layout analysis + + :param line_overlap: If two characters have more overlap than this they + are considered to be on the same line. The overlap is specified + relative to the minimum height of both characters. + :param char_margin: If two characters are closer together than this + margin they are considered part of the same line. The margin is + specified relative to the width of the character. + :param word_margin: If two characters on the same line are further apart + than this margin then they are considered to be two separate words, and + an intermediate space will be added for readability. The margin is + specified relative to the width of the character. + :param line_margin: If two lines are are close together they are + considered to be part of the same paragraph. The margin is + specified relative to the height of a line. + :param boxes_flow: Specifies how much a horizontal and vertical position + of a text matters when determining the order of text boxes. 
The value + should be within the range of -1.0 (only horizontal position + matters) to +1.0 (only vertical position matters). You can also pass + `None` to disable advanced layout analysis, and instead return text + based on the position of the bottom left corner of the text box. + :param detect_vertical: If vertical text should be considered during + layout analysis + :param all_texts: If layout analysis should be performed on text in + figures. + """ + + def __init__( + self, + line_overlap: float = 0.5, + char_margin: float = 2.0, + line_margin: float = 0.5, + word_margin: float = 0.1, + boxes_flow: Optional[float] = 0.5, + detect_vertical: bool = False, + all_texts: bool = False, + ) -> None: + self.line_overlap = line_overlap + self.char_margin = char_margin + self.line_margin = line_margin + self.word_margin = word_margin + self.boxes_flow = boxes_flow + self.detect_vertical = detect_vertical + self.all_texts = all_texts + + self._validate() + + def _validate(self) -> None: + if self.boxes_flow is not None: + boxes_flow_err_msg = ( + "LAParam boxes_flow should be None, or a " "number between -1 and +1" + ) + if not ( + isinstance(self.boxes_flow, int) or isinstance(self.boxes_flow, float) + ): + raise PDFTypeError(boxes_flow_err_msg) + if not -1 <= self.boxes_flow <= 1: + raise PDFValueError(boxes_flow_err_msg) + + def __repr__(self) -> str: + return ( + "" + % (self.char_margin, self.line_margin, self.word_margin, self.all_texts) + ) + + +class LTItem: + """Interface for things that can be analyzed""" + + def analyze(self, laparams: LAParams) -> None: + """Perform the layout analysis.""" + pass + + +class LTText: + """Interface for things that have text""" + + def __repr__(self) -> str: + return "<{} {!r}>".format(self.__class__.__name__, self.get_text()) + + def get_text(self) -> str: + """Text contained in this object""" + raise NotImplementedError + + +class LTComponent(LTItem): + """Object with a bounding box""" + + def __init__(self, bbox: Rect) -> None: + LTItem.__init__(self) + self.set_bbox(bbox) + + def __repr__(self) -> str: + return "<{} {}>".format(self.__class__.__name__, bbox2str(self.bbox)) + + # Disable comparison. 
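+    # Added rationale (not upstream): layout analysis keeps
+    # (idx, dist, id(obj1), id(obj2), obj1, obj2) tuples in a heap and relies
+    # on the id() fields so that two LTComponents are never compared directly
+    # (see group_textboxes below). Raising here turns any accidental
+    # comparison into a loud error instead of a silently wrong ordering.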
+ def __lt__(self, _: object) -> bool: + raise PDFValueError + + def __le__(self, _: object) -> bool: + raise PDFValueError + + def __gt__(self, _: object) -> bool: + raise PDFValueError + + def __ge__(self, _: object) -> bool: + raise PDFValueError + + def set_bbox(self, bbox: Rect) -> None: + (x0, y0, x1, y1) = bbox + self.x0 = x0 + self.y0 = y0 + self.x1 = x1 + self.y1 = y1 + self.width = x1 - x0 + self.height = y1 - y0 + self.bbox = bbox + + def is_empty(self) -> bool: + return self.width <= 0 or self.height <= 0 + + def is_hoverlap(self, obj: "LTComponent") -> bool: + assert isinstance(obj, LTComponent), str(type(obj)) + return obj.x0 <= self.x1 and self.x0 <= obj.x1 + + def hdistance(self, obj: "LTComponent") -> float: + assert isinstance(obj, LTComponent), str(type(obj)) + if self.is_hoverlap(obj): + return 0 + else: + return min(abs(self.x0 - obj.x1), abs(self.x1 - obj.x0)) + + def hoverlap(self, obj: "LTComponent") -> float: + assert isinstance(obj, LTComponent), str(type(obj)) + if self.is_hoverlap(obj): + return min(abs(self.x0 - obj.x1), abs(self.x1 - obj.x0)) + else: + return 0 + + def is_voverlap(self, obj: "LTComponent") -> bool: + assert isinstance(obj, LTComponent), str(type(obj)) + return obj.y0 <= self.y1 and self.y0 <= obj.y1 + + def vdistance(self, obj: "LTComponent") -> float: + assert isinstance(obj, LTComponent), str(type(obj)) + if self.is_voverlap(obj): + return 0 + else: + return min(abs(self.y0 - obj.y1), abs(self.y1 - obj.y0)) + + def voverlap(self, obj: "LTComponent") -> float: + assert isinstance(obj, LTComponent), str(type(obj)) + if self.is_voverlap(obj): + return min(abs(self.y0 - obj.y1), abs(self.y1 - obj.y0)) + else: + return 0 + + +class LTCurve(LTComponent): + """ + A generic Bezier curve + + The parameter `original_path` contains the original + pathing information from the pdf (e.g. for reconstructing Bezier Curves). + + `dashing_style` contains the Dashing information if any. + """ + + def __init__( + self, + linewidth: float, + pts: List[Point], + stroke: bool = False, + fill: bool = False, + evenodd: bool = False, + stroking_color: Optional[Color] = None, + non_stroking_color: Optional[Color] = None, + original_path: Optional[List[PathSegment]] = None, + dashing_style: Optional[Tuple[object, object]] = None, + ) -> None: + LTComponent.__init__(self, get_bound(pts)) + self.pts = pts + self.linewidth = linewidth + self.stroke = stroke + self.fill = fill + self.evenodd = evenodd + self.stroking_color = stroking_color + self.non_stroking_color = non_stroking_color + self.original_path = original_path + self.dashing_style = dashing_style + + def get_pts(self) -> str: + return ",".join("%.3f,%.3f" % p for p in self.pts) + + +class LTLine(LTCurve): + """A single straight line. + + Could be used for separating text or figures. + """ + + def __init__( + self, + linewidth: float, + p0: Point, + p1: Point, + stroke: bool = False, + fill: bool = False, + evenodd: bool = False, + stroking_color: Optional[Color] = None, + non_stroking_color: Optional[Color] = None, + original_path: Optional[List[PathSegment]] = None, + dashing_style: Optional[Tuple[object, object]] = None, + ) -> None: + LTCurve.__init__( + self, + linewidth, + [p0, p1], + stroke, + fill, + evenodd, + stroking_color, + non_stroking_color, + original_path, + dashing_style, + ) + + +class LTRect(LTCurve): + """A rectangle. + + Could be used for framing another pictures or figures. 
+ """ + + def __init__( + self, + linewidth: float, + bbox: Rect, + stroke: bool = False, + fill: bool = False, + evenodd: bool = False, + stroking_color: Optional[Color] = None, + non_stroking_color: Optional[Color] = None, + original_path: Optional[List[PathSegment]] = None, + dashing_style: Optional[Tuple[object, object]] = None, + ) -> None: + (x0, y0, x1, y1) = bbox + LTCurve.__init__( + self, + linewidth, + [(x0, y0), (x1, y0), (x1, y1), (x0, y1)], + stroke, + fill, + evenodd, + stroking_color, + non_stroking_color, + original_path, + dashing_style, + ) + + +class LTImage(LTComponent): + """An image object. + + Embedded images can be in JPEG, Bitmap or JBIG2. + """ + + def __init__(self, name: str, stream: PDFStream, bbox: Rect) -> None: + LTComponent.__init__(self, bbox) + self.name = name + self.stream = stream + self.srcsize = (stream.get_any(("W", "Width")), stream.get_any(("H", "Height"))) + self.imagemask = stream.get_any(("IM", "ImageMask")) + self.bits = stream.get_any(("BPC", "BitsPerComponent"), 1) + self.colorspace = stream.get_any(("CS", "ColorSpace")) + if not isinstance(self.colorspace, list): + self.colorspace = [self.colorspace] + + def __repr__(self) -> str: + return "<{}({}) {} {!r}>".format( + self.__class__.__name__, + self.name, + bbox2str(self.bbox), + self.srcsize, + ) + + +class LTAnno(LTItem, LTText): + """Actual letter in the text as a Unicode string. + + Note that, while a LTChar object has actual boundaries, LTAnno objects does + not, as these are "virtual" characters, inserted by a layout analyzer + according to the relationship between two characters (e.g. a space). + """ + + def __init__(self, text: str) -> None: + self._text = text + return + + def get_text(self) -> str: + return self._text + + +class LTChar(LTComponent, LTText): + """Actual letter in the text as a Unicode string.""" + + def __init__( + self, + matrix: Matrix, + font: PDFFont, + fontsize: float, + scaling: float, + rise: float, + text: str, + textwidth: float, + textdisp: Union[float, Tuple[Optional[float], float]], + ncs: PDFColorSpace, + graphicstate: PDFGraphicState, + ) -> None: + LTText.__init__(self) + self._text = text + self.matrix = matrix + self.fontname = font.fontname + self.ncs = ncs + self.graphicstate = graphicstate + self.adv = textwidth * fontsize * scaling + # compute the boundary rectangle. 
+ if font.is_vertical(): + # vertical + assert isinstance(textdisp, tuple) + (vx, vy) = textdisp + if vx is None: + vx = fontsize * 0.5 + else: + vx = vx * fontsize * 0.001 + vy = (1000 - vy) * fontsize * 0.001 + bbox_lower_left = (-vx, vy + rise + self.adv) + bbox_upper_right = (-vx + fontsize, vy + rise) + else: + # horizontal + descent = font.get_descent() * fontsize + bbox_lower_left = (0, descent + rise) + bbox_upper_right = (self.adv, descent + rise + fontsize) + (a, b, c, d, e, f) = self.matrix + self.upright = 0 < a * d * scaling and b * c <= 0 + (x0, y0) = apply_matrix_pt(self.matrix, bbox_lower_left) + (x1, y1) = apply_matrix_pt(self.matrix, bbox_upper_right) + if x1 < x0: + (x0, x1) = (x1, x0) + if y1 < y0: + (y0, y1) = (y1, y0) + LTComponent.__init__(self, (x0, y0, x1, y1)) + if font.is_vertical(): + self.size = self.width + else: + self.size = self.height + return + + def __repr__(self) -> str: + return "<{} {} matrix={} font={!r} adv={} text={!r}>".format( + self.__class__.__name__, + bbox2str(self.bbox), + matrix2str(self.matrix), + self.fontname, + self.adv, + self.get_text(), + ) + + def get_text(self) -> str: + return self._text + + +LTItemT = TypeVar("LTItemT", bound=LTItem) + + +class LTContainer(LTComponent, Generic[LTItemT]): + """Object that can be extended and analyzed""" + + def __init__(self, bbox: Rect) -> None: + LTComponent.__init__(self, bbox) + self._objs: List[LTItemT] = [] + return + + def __iter__(self) -> Iterator[LTItemT]: + return iter(self._objs) + + def __len__(self) -> int: + return len(self._objs) + + def add(self, obj: LTItemT) -> None: + self._objs.append(obj) + return + + def extend(self, objs: Iterable[LTItemT]) -> None: + for obj in objs: + self.add(obj) + return + + def analyze(self, laparams: LAParams) -> None: + for obj in self._objs: + obj.analyze(laparams) + return + + +class LTExpandableContainer(LTContainer[LTItemT]): + def __init__(self) -> None: + LTContainer.__init__(self, (+INF, +INF, -INF, -INF)) + return + + # Incompatible override: we take an LTComponent (with bounding box), but + # super() LTContainer only considers LTItem (no bounding box). + def add(self, obj: LTComponent) -> None: # type: ignore[override] + LTContainer.add(self, cast(LTItemT, obj)) + self.set_bbox( + ( + min(self.x0, obj.x0), + min(self.y0, obj.y0), + max(self.x1, obj.x1), + max(self.y1, obj.y1), + ) + ) + return + + +class LTTextContainer(LTExpandableContainer[LTItemT], LTText): + def __init__(self) -> None: + LTText.__init__(self) + LTExpandableContainer.__init__(self) + return + + def get_text(self) -> str: + return "".join( + cast(LTText, obj).get_text() for obj in self if isinstance(obj, LTText) + ) + + +TextLineElement = Union[LTChar, LTAnno] + + +class LTTextLine(LTTextContainer[TextLineElement]): + """Contains a list of LTChar objects that represent a single text line. + + The characters are aligned either horizontally or vertically, depending on + the text's writing mode. 
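+
+    Added note (not upstream): analyze() appends a trailing LTAnno("\n"), so
+    get_text() on an analyzed line always ends with a newline.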
+ """ + + def __init__(self, word_margin: float) -> None: + super().__init__() + self.word_margin = word_margin + return + + def __repr__(self) -> str: + return "<{} {} {!r}>".format( + self.__class__.__name__, + bbox2str(self.bbox), + self.get_text(), + ) + + def analyze(self, laparams: LAParams) -> None: + for obj in self._objs: + obj.analyze(laparams) + LTContainer.add(self, LTAnno("\n")) + return + + def find_neighbors( + self, plane: Plane[LTComponentT], ratio: float + ) -> List["LTTextLine"]: + raise NotImplementedError + + def is_empty(self) -> bool: + return super().is_empty() or self.get_text().isspace() + + +class LTTextLineHorizontal(LTTextLine): + def __init__(self, word_margin: float) -> None: + LTTextLine.__init__(self, word_margin) + self._x1: float = +INF + return + + # Incompatible override: we take an LTComponent (with bounding box), but + # LTContainer only considers LTItem (no bounding box). + def add(self, obj: LTComponent) -> None: # type: ignore[override] + if isinstance(obj, LTChar) and self.word_margin: + margin = self.word_margin * max(obj.width, obj.height) + if self._x1 < obj.x0 - margin: + LTContainer.add(self, LTAnno(" ")) + self._x1 = obj.x1 + super().add(obj) + return + + def find_neighbors( + self, plane: Plane[LTComponentT], ratio: float + ) -> List[LTTextLine]: + """ + Finds neighboring LTTextLineHorizontals in the plane. + + Returns a list of other LTTestLineHorizontals in the plane which are + close to self. "Close" can be controlled by ratio. The returned objects + will be the same height as self, and also either left-, right-, or + centrally-aligned. + """ + d = ratio * self.height + objs = plane.find((self.x0, self.y0 - d, self.x1, self.y1 + d)) + return [ + obj + for obj in objs + if ( + isinstance(obj, LTTextLineHorizontal) + and self._is_same_height_as(obj, tolerance=d) + and ( + self._is_left_aligned_with(obj, tolerance=d) + or self._is_right_aligned_with(obj, tolerance=d) + or self._is_centrally_aligned_with(obj, tolerance=d) + ) + ) + ] + + def _is_left_aligned_with(self, other: LTComponent, tolerance: float = 0) -> bool: + """ + Whether the left-hand edge of `other` is within `tolerance`. + """ + return abs(other.x0 - self.x0) <= tolerance + + def _is_right_aligned_with(self, other: LTComponent, tolerance: float = 0) -> bool: + """ + Whether the right-hand edge of `other` is within `tolerance`. + """ + return abs(other.x1 - self.x1) <= tolerance + + def _is_centrally_aligned_with( + self, other: LTComponent, tolerance: float = 0 + ) -> bool: + """ + Whether the horizontal center of `other` is within `tolerance`. + """ + return abs((other.x0 + other.x1) / 2 - (self.x0 + self.x1) / 2) <= tolerance + + def _is_same_height_as(self, other: LTComponent, tolerance: float = 0) -> bool: + return abs(other.height - self.height) <= tolerance + + +class LTTextLineVertical(LTTextLine): + def __init__(self, word_margin: float) -> None: + LTTextLine.__init__(self, word_margin) + self._y0: float = -INF + return + + # Incompatible override: we take an LTComponent (with bounding box), but + # LTContainer only considers LTItem (no bounding box). 
+ def add(self, obj: LTComponent) -> None: # type: ignore[override] + if isinstance(obj, LTChar) and self.word_margin: + margin = self.word_margin * max(obj.width, obj.height) + if obj.y1 + margin < self._y0: + LTContainer.add(self, LTAnno(" ")) + self._y0 = obj.y0 + super().add(obj) + return + + def find_neighbors( + self, plane: Plane[LTComponentT], ratio: float + ) -> List[LTTextLine]: + """ + Finds neighboring LTTextLineVerticals in the plane. + + Returns a list of other LTTextLineVerticals in the plane which are + close to self. "Close" can be controlled by ratio. The returned objects + will be the same width as self, and also either upper-, lower-, or + centrally-aligned. + """ + d = ratio * self.width + objs = plane.find((self.x0 - d, self.y0, self.x1 + d, self.y1)) + return [ + obj + for obj in objs + if ( + isinstance(obj, LTTextLineVertical) + and self._is_same_width_as(obj, tolerance=d) + and ( + self._is_lower_aligned_with(obj, tolerance=d) + or self._is_upper_aligned_with(obj, tolerance=d) + or self._is_centrally_aligned_with(obj, tolerance=d) + ) + ) + ] + + def _is_lower_aligned_with(self, other: LTComponent, tolerance: float = 0) -> bool: + """ + Whether the lower edge of `other` is within `tolerance`. + """ + return abs(other.y0 - self.y0) <= tolerance + + def _is_upper_aligned_with(self, other: LTComponent, tolerance: float = 0) -> bool: + """ + Whether the upper edge of `other` is within `tolerance`. + """ + return abs(other.y1 - self.y1) <= tolerance + + def _is_centrally_aligned_with( + self, other: LTComponent, tolerance: float = 0 + ) -> bool: + """ + Whether the vertical center of `other` is within `tolerance`. + """ + return abs((other.y0 + other.y1) / 2 - (self.y0 + self.y1) / 2) <= tolerance + + def _is_same_width_as(self, other: LTComponent, tolerance: float) -> bool: + return abs(other.width - self.width) <= tolerance + + +class LTTextBox(LTTextContainer[LTTextLine]): + """Represents a group of text chunks in a rectangular area. + + Note that this box is created by geometric analysis and does not + necessarily represents a logical boundary of the text. It contains a list + of LTTextLine objects. + """ + + def __init__(self) -> None: + LTTextContainer.__init__(self) + self.index: int = -1 + return + + def __repr__(self) -> str: + return "<{}({}) {} {!r}>".format( + self.__class__.__name__, + self.index, + bbox2str(self.bbox), + self.get_text(), + ) + + def get_writing_mode(self) -> str: + raise NotImplementedError + + +class LTTextBoxHorizontal(LTTextBox): + def analyze(self, laparams: LAParams) -> None: + super().analyze(laparams) + self._objs.sort(key=lambda obj: -obj.y1) + return + + def get_writing_mode(self) -> str: + return "lr-tb" + + +class LTTextBoxVertical(LTTextBox): + def analyze(self, laparams: LAParams) -> None: + super().analyze(laparams) + self._objs.sort(key=lambda obj: -obj.x1) + return + + def get_writing_mode(self) -> str: + return "tb-rl" + + +TextGroupElement = Union[LTTextBox, "LTTextGroup"] + + +class LTTextGroup(LTTextContainer[TextGroupElement]): + def __init__(self, objs: Iterable[TextGroupElement]) -> None: + super().__init__() + self.extend(objs) + return + + +class LTTextGroupLRTB(LTTextGroup): + def analyze(self, laparams: LAParams) -> None: + super().analyze(laparams) + assert laparams.boxes_flow is not None + boxes_flow = laparams.boxes_flow + # reorder the objects from top-left to bottom-right. 
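+        # Added worked example (not upstream): the key blends both axes by
+        # boxes_flow. At the default boxes_flow=0.5 the key is
+        # 0.5 * x0 - 1.5 * (y0 + y1), so vertical position dominates; at
+        # boxes_flow=-1 it degenerates to 2 * x0 (purely horizontal) and at
+        # boxes_flow=+1 to -2 * (y0 + y1) (purely vertical).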
+ self._objs.sort( + key=lambda obj: (1 - boxes_flow) * obj.x0 + - (1 + boxes_flow) * (obj.y0 + obj.y1) + ) + return + + +class LTTextGroupTBRL(LTTextGroup): + def analyze(self, laparams: LAParams) -> None: + super().analyze(laparams) + assert laparams.boxes_flow is not None + boxes_flow = laparams.boxes_flow + # reorder the objects from top-right to bottom-left. + self._objs.sort( + key=lambda obj: -(1 + boxes_flow) * (obj.x0 + obj.x1) + - (1 - boxes_flow) * obj.y1 + ) + return + + +class LTLayoutContainer(LTContainer[LTComponent]): + def __init__(self, bbox: Rect) -> None: + LTContainer.__init__(self, bbox) + self.groups: Optional[List[LTTextGroup]] = None + return + + # group_objects: group text object to textlines. + def group_objects( + self, laparams: LAParams, objs: Iterable[LTComponent] + ) -> Iterator[LTTextLine]: + obj0 = None + line = None + for obj1 in objs: + if obj0 is not None: + # halign: obj0 and obj1 is horizontally aligned. + # + # +------+ - - - + # | obj0 | - - +------+ - + # | | | obj1 | | (line_overlap) + # +------+ - - | | - + # - - - +------+ + # + # |<--->| + # (char_margin) + halign = ( + obj0.is_voverlap(obj1) + and min(obj0.height, obj1.height) * laparams.line_overlap + < obj0.voverlap(obj1) + and obj0.hdistance(obj1) + < max(obj0.width, obj1.width) * laparams.char_margin + ) + + # valign: obj0 and obj1 is vertically aligned. + # + # +------+ + # | obj0 | + # | | + # +------+ - - - + # | | | (char_margin) + # +------+ - - + # | obj1 | + # | | + # +------+ + # + # |<-->| + # (line_overlap) + valign = ( + laparams.detect_vertical + and obj0.is_hoverlap(obj1) + and min(obj0.width, obj1.width) * laparams.line_overlap + < obj0.hoverlap(obj1) + and obj0.vdistance(obj1) + < max(obj0.height, obj1.height) * laparams.char_margin + ) + + if (halign and isinstance(line, LTTextLineHorizontal)) or ( + valign and isinstance(line, LTTextLineVertical) + ): + + line.add(obj1) + elif line is not None: + yield line + line = None + else: + if valign and not halign: + line = LTTextLineVertical(laparams.word_margin) + line.add(obj0) + line.add(obj1) + elif halign and not valign: + line = LTTextLineHorizontal(laparams.word_margin) + line.add(obj0) + line.add(obj1) + else: + line = LTTextLineHorizontal(laparams.word_margin) + line.add(obj0) + yield line + line = None + obj0 = obj1 + if line is None: + line = LTTextLineHorizontal(laparams.word_margin) + assert obj0 is not None + line.add(obj0) + yield line + return + + def group_textlines( + self, laparams: LAParams, lines: Iterable[LTTextLine] + ) -> Iterator[LTTextBox]: + """Group neighboring lines to textboxes""" + plane: Plane[LTTextLine] = Plane(self.bbox) + plane.extend(lines) + boxes: Dict[LTTextLine, LTTextBox] = {} + for line in lines: + neighbors = line.find_neighbors(plane, laparams.line_margin) + members = [line] + for obj1 in neighbors: + members.append(obj1) + if obj1 in boxes: + members.extend(boxes.pop(obj1)) + if isinstance(line, LTTextLineHorizontal): + box: LTTextBox = LTTextBoxHorizontal() + else: + box = LTTextBoxVertical() + for obj in uniq(members): + box.add(obj) + boxes[obj] = box + done = set() + for line in lines: + if line not in boxes: + continue + box = boxes[line] + if box in done: + continue + done.add(box) + if not box.is_empty(): + yield box + return + + def group_textboxes( + self, laparams: LAParams, boxes: Sequence[LTTextBox] + ) -> List[LTTextGroup]: + """Group textboxes hierarchically. + + Get pair-wise distances, via dist func defined below, and then merge + from the closest textbox pair. 
Once obj1 and obj2 are merged / + grouped, the resulting group is considered as a new object, and its + distances to other objects & groups are added to the process queue. + + For performance reason, pair-wise distances and object pair info are + maintained in a heap of (idx, dist, id(obj1), id(obj2), obj1, obj2) + tuples. It ensures quick access to the smallest element. Note that + since comparison operators, e.g., __lt__, are disabled for + LTComponent, id(obj) has to appear before obj in element tuples. + + :param laparams: LAParams object. + :param boxes: All textbox objects to be grouped. + :return: a list that has only one element, the final top level group. + """ + + ElementT = Union[LTTextBox, LTTextGroup] + plane: Plane[ElementT] = Plane(self.bbox) + + def dist(obj1: LTComponent, obj2: LTComponent) -> float: + """A distance function between two TextBoxes. + + Consider the bounding rectangle for obj1 and obj2. + Return its area less the areas of obj1 and obj2, + shown as 'www' below. This value may be negative. + +------+..........+ (x1, y1) + | obj1 |wwwwwwwwww: + +------+www+------+ + :wwwwwwwwww| obj2 | + (x0, y0) +..........+------+ + """ + x0 = min(obj1.x0, obj2.x0) + y0 = min(obj1.y0, obj2.y0) + x1 = max(obj1.x1, obj2.x1) + y1 = max(obj1.y1, obj2.y1) + return ( + (x1 - x0) * (y1 - y0) + - obj1.width * obj1.height + - obj2.width * obj2.height + ) + + def isany(obj1: ElementT, obj2: ElementT) -> Set[ElementT]: + """Check if there's any other object between obj1 and obj2.""" + x0 = min(obj1.x0, obj2.x0) + y0 = min(obj1.y0, obj2.y0) + x1 = max(obj1.x1, obj2.x1) + y1 = max(obj1.y1, obj2.y1) + objs = set(plane.find((x0, y0, x1, y1))) + return objs.difference((obj1, obj2)) + + dists: List[Tuple[bool, float, int, int, ElementT, ElementT]] = [] + for i in range(len(boxes)): + box1 = boxes[i] + for j in range(i + 1, len(boxes)): + box2 = boxes[j] + dists.append((False, dist(box1, box2), id(box1), id(box2), box1, box2)) + heapq.heapify(dists) + + plane.extend(boxes) + done = set() + while len(dists) > 0: + (skip_isany, d, id1, id2, obj1, obj2) = heapq.heappop(dists) + # Skip objects that are already merged + if (id1 not in done) and (id2 not in done): + if not skip_isany and isany(obj1, obj2): + heapq.heappush(dists, (True, d, id1, id2, obj1, obj2)) + continue + if isinstance(obj1, (LTTextBoxVertical, LTTextGroupTBRL)) or isinstance( + obj2, (LTTextBoxVertical, LTTextGroupTBRL) + ): + group: LTTextGroup = LTTextGroupTBRL([obj1, obj2]) + else: + group = LTTextGroupLRTB([obj1, obj2]) + plane.remove(obj1) + plane.remove(obj2) + done.update([id1, id2]) + + for other in plane: + heapq.heappush( + dists, + (False, dist(group, other), id(group), id(other), group, other), + ) + plane.add(group) + # By now only groups are in the plane + return list(cast(LTTextGroup, g) for g in plane) + + def analyze(self, laparams: LAParams) -> None: + # textobjs is a list of LTChar objects, i.e. + # it has all the individual characters in the page. 
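+        # Added summary (not upstream): analysis proceeds in three passes:
+        # 1) group_objects merges neighbouring LTChars into text lines,
+        # 2) group_textlines merges neighbouring lines into text boxes, and
+        # 3) unless boxes_flow is None, group_textboxes merges the boxes
+        #    bottom-up into LTTextGroups whose traversal assigns the reading
+        #    order (the IndexAssigner run below).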
+ (textobjs, otherobjs) = fsplit(lambda obj: isinstance(obj, LTChar), self) + for obj in otherobjs: + obj.analyze(laparams) + if not textobjs: + return + textlines = list(self.group_objects(laparams, textobjs)) + (empties, textlines) = fsplit(lambda obj: obj.is_empty(), textlines) + for obj in empties: + obj.analyze(laparams) + textboxes = list(self.group_textlines(laparams, textlines)) + if laparams.boxes_flow is None: + for textbox in textboxes: + textbox.analyze(laparams) + + def getkey(box: LTTextBox) -> Tuple[int, float, float]: + if isinstance(box, LTTextBoxVertical): + return (0, -box.x1, -box.y0) + else: + return (1, -box.y0, box.x0) + + textboxes.sort(key=getkey) + else: + self.groups = self.group_textboxes(laparams, textboxes) + assigner = IndexAssigner() + for group in self.groups: + group.analyze(laparams) + assigner.run(group) + textboxes.sort(key=lambda box: box.index) + self._objs = ( + cast(List[LTComponent], textboxes) + + otherobjs + + cast(List[LTComponent], empties) + ) + return + + +class LTFigure(LTLayoutContainer): + """Represents an area used by PDF Form objects. + + PDF Forms can be used to present figures or pictures by embedding yet + another PDF document within a page. Note that LTFigure objects can appear + recursively. + """ + + def __init__(self, name: str, bbox: Rect, matrix: Matrix) -> None: + self.name = name + self.matrix = matrix + (x, y, w, h) = bbox + bounds = ((x, y), (x + w, y), (x, y + h), (x + w, y + h)) + bbox = get_bound(apply_matrix_pt(matrix, (p, q)) for (p, q) in bounds) + LTLayoutContainer.__init__(self, bbox) + return + + def __repr__(self) -> str: + return "<{}({}) {} matrix={}>".format( + self.__class__.__name__, + self.name, + bbox2str(self.bbox), + matrix2str(self.matrix), + ) + + def analyze(self, laparams: LAParams) -> None: + if not laparams.all_texts: + return + LTLayoutContainer.analyze(self, laparams) + return + + +class LTPage(LTLayoutContainer): + """Represents an entire page. + + Like any other LTLayoutContainer, an LTPage can be iterated to obtain child + objects like LTTextBox, LTFigure, LTImage, LTRect, LTCurve and LTLine. + """ + + def __init__(self, pageid: int, bbox: Rect, rotate: float = 0) -> None: + LTLayoutContainer.__init__(self, bbox) + self.pageid = pageid + self.rotate = rotate + return + + def __repr__(self) -> str: + return "<{}({!r}) {} rotate={!r}>".format( + self.__class__.__name__, + self.pageid, + bbox2str(self.bbox), + self.rotate, + ) diff --git a/templates/skills/file_manager/dependencies/pdfminer/lzw.py b/templates/skills/file_manager/dependencies/pdfminer/lzw.py new file mode 100644 index 00000000..30e303fc --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/lzw.py @@ -0,0 +1,105 @@ +import logging +from io import BytesIO +from typing import BinaryIO, Iterator, List, Optional, cast + +from .pdfexceptions import PDFException, PDFEOFError + +logger = logging.getLogger(__name__) + + +class CorruptDataError(PDFException): + pass + + +class LZWDecoder: + def __init__(self, fp: BinaryIO) -> None: + self.fp = fp + self.buff = 0 + self.bpos = 8 + self.nbits = 9 + # NB: self.table stores None only in indices 256 and 257 + self.table: List[Optional[bytes]] = [] + self.prevbuf: Optional[bytes] = None + + def readbits(self, bits: int) -> int: + v = 0 + while 1: + # the number of remaining bits we can get from the current buffer. 
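+            # Added worked example (not upstream): with buff=0b10110100,
+            # bpos=3 and bits=4 we get r=5, so the next four bits are
+            # (buff >> (5 - 4)) & 0b1111 == 0b1010 and bpos advances to 7.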
+            r = 8 - self.bpos
+            if bits <= r:
+                # |-----8-bits-----|
+                # |-bpos-|-bits-|  |
+                # |      |----r----|
+                v = (v << bits) | ((self.buff >> (r - bits)) & ((1 << bits) - 1))
+                self.bpos += bits
+                break
+            else:
+                # |-----8-bits-----|
+                # |-bpos-|---bits----...
+                # |      |----r----|
+                v = (v << r) | (self.buff & ((1 << r) - 1))
+                bits -= r
+                x = self.fp.read(1)
+                if not x:
+                    raise PDFEOFError
+                self.buff = ord(x)
+                self.bpos = 0
+        return v
+
+    def feed(self, code: int) -> bytes:
+        x = b""
+        if code == 256:
+            self.table = [bytes((c,)) for c in range(256)]  # 0-255
+            self.table.append(None)  # 256
+            self.table.append(None)  # 257
+            self.prevbuf = b""
+            self.nbits = 9
+        elif code == 257:
+            pass
+        elif not self.prevbuf:
+            x = self.prevbuf = cast(bytes, self.table[code])  # assume not None
+        else:
+            if code < len(self.table):
+                x = cast(bytes, self.table[code])  # assume not None
+                self.table.append(self.prevbuf + x[:1])
+            elif code == len(self.table):
+                self.table.append(self.prevbuf + self.prevbuf[:1])
+                x = cast(bytes, self.table[code])
+            else:
+                raise CorruptDataError
+            table_length = len(self.table)
+            if table_length == 511:
+                self.nbits = 10
+            elif table_length == 1023:
+                self.nbits = 11
+            elif table_length == 2047:
+                self.nbits = 12
+            self.prevbuf = x
+        return x
+
+    def run(self) -> Iterator[bytes]:
+        while 1:
+            try:
+                code = self.readbits(self.nbits)
+            except EOFError:
+                break
+            try:
+                x = self.feed(code)
+            except CorruptDataError:
+                # just ignore corrupt data and stop yielding there
+                break
+            yield x
+
+            logger.debug(
+                "nbits=%d, code=%d, output=%r, table=%r",
+                self.nbits,
+                code,
+                x,
+                self.table[258:],
+            )
+
+
+def lzwdecode(data: bytes) -> bytes:
+    fp = BytesIO(data)
+    s = LZWDecoder(fp).run()
+    return b"".join(s)
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfcolor.py b/templates/skills/file_manager/dependencies/pdfminer/pdfcolor.py
new file mode 100644
index 00000000..81319e38
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdfcolor.py
@@ -0,0 +1,33 @@
+import collections
+from typing import Dict
+
+from .psparser import LIT
+
+LITERAL_DEVICE_GRAY = LIT("DeviceGray")
+LITERAL_DEVICE_RGB = LIT("DeviceRGB")
+LITERAL_DEVICE_CMYK = LIT("DeviceCMYK")
+
+
+class PDFColorSpace:
+    def __init__(self, name: str, ncomponents: int) -> None:
+        self.name = name
+        self.ncomponents = ncomponents
+
+    def __repr__(self) -> str:
+        return "<PDFColorSpace: %s, ncomponents=%d>" % (self.name, self.ncomponents)
+
+
+PREDEFINED_COLORSPACE: Dict[str, PDFColorSpace] = collections.OrderedDict()
+
+for (name, n) in [
+    ("DeviceGray", 1),  # default value first
+    ("CalRGB", 3),
+    ("CalGray", 1),
+    ("Lab", 3),
+    ("DeviceRGB", 3),
+    ("DeviceCMYK", 4),
+    ("Separation", 1),
+    ("Indexed", 1),
+    ("Pattern", 1),
+]:
+    PREDEFINED_COLORSPACE[name] = PDFColorSpace(name, n)
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfdevice.py b/templates/skills/file_manager/dependencies/pdfminer/pdfdevice.py
new file mode 100644
index 00000000..a3564909
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdfdevice.py
@@ -0,0 +1,317 @@
+from typing import (
+    BinaryIO,
+    Iterable,
+    List,
+    Optional,
+    Sequence,
+    TYPE_CHECKING,
+    Union,
+    cast,
+)
+
+from pdfminer.psparser import PSLiteral
+from . import utils
+from .pdfcolor import PDFColorSpace
+from .pdffont import PDFFont
+from .pdffont import PDFUnicodeNotDefined
+from .pdfpage import PDFPage
+from .pdftypes import PDFStream
+from .utils import Matrix, Point, Rect, PathSegment
+
+if TYPE_CHECKING:
+    from .pdfinterp import PDFGraphicState
+    from .pdfinterp import PDFResourceManager
+    from .pdfinterp import PDFTextState
+    from .pdfinterp import PDFStackT
+
+
+PDFTextSeq = Iterable[Union[int, float, bytes]]
+
+
+class PDFDevice:
+    """Translate the output of PDFPageInterpreter to the output that is needed"""
+
+    def __init__(self, rsrcmgr: "PDFResourceManager") -> None:
+        self.rsrcmgr = rsrcmgr
+        self.ctm: Optional[Matrix] = None
+
+    def __repr__(self) -> str:
+        return "<PDFDevice>"
+
+    def __enter__(self) -> "PDFDevice":
+        return self
+
+    def __exit__(self, exc_type: object, exc_val: object, exc_tb: object) -> None:
+        self.close()
+
+    def close(self) -> None:
+        pass
+
+    def set_ctm(self, ctm: Matrix) -> None:
+        self.ctm = ctm
+
+    def begin_tag(self, tag: PSLiteral, props: Optional["PDFStackT"] = None) -> None:
+        pass
+
+    def end_tag(self) -> None:
+        pass
+
+    def do_tag(self, tag: PSLiteral, props: Optional["PDFStackT"] = None) -> None:
+        pass
+
+    def begin_page(self, page: PDFPage, ctm: Matrix) -> None:
+        pass
+
+    def end_page(self, page: PDFPage) -> None:
+        pass
+
+    def begin_figure(self, name: str, bbox: Rect, matrix: Matrix) -> None:
+        pass
+
+    def end_figure(self, name: str) -> None:
+        pass
+
+    def paint_path(
+        self,
+        graphicstate: "PDFGraphicState",
+        stroke: bool,
+        fill: bool,
+        evenodd: bool,
+        path: Sequence[PathSegment],
+    ) -> None:
+        pass
+
+    def render_image(self, name: str, stream: PDFStream) -> None:
+        pass
+
+    def render_string(
+        self,
+        textstate: "PDFTextState",
+        seq: PDFTextSeq,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> None:
+        pass
+
+
+class PDFTextDevice(PDFDevice):
+    def render_string(
+        self,
+        textstate: "PDFTextState",
+        seq: PDFTextSeq,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> None:
+        assert self.ctm is not None
+        matrix = utils.mult_matrix(textstate.matrix, self.ctm)
+        font = textstate.font
+        fontsize = textstate.fontsize
+        scaling = textstate.scaling * 0.01
+        charspace = textstate.charspace * scaling
+        wordspace = textstate.wordspace * scaling
+        rise = textstate.rise
+        assert font is not None
+        if font.is_multibyte():
+            wordspace = 0
+        dxscale = 0.001 * fontsize * scaling
+        if font.is_vertical():
+            textstate.linematrix = self.render_string_vertical(
+                seq,
+                matrix,
+                textstate.linematrix,
+                font,
+                fontsize,
+                scaling,
+                charspace,
+                wordspace,
+                rise,
+                dxscale,
+                ncs,
+                graphicstate,
+            )
+        else:
+            textstate.linematrix = self.render_string_horizontal(
+                seq,
+                matrix,
+                textstate.linematrix,
+                font,
+                fontsize,
+                scaling,
+                charspace,
+                wordspace,
+                rise,
+                dxscale,
+                ncs,
+                graphicstate,
+            )
+
+    def render_string_horizontal(
+        self,
+        seq: PDFTextSeq,
+        matrix: Matrix,
+        pos: Point,
+        font: PDFFont,
+        fontsize: float,
+        scaling: float,
+        charspace: float,
+        wordspace: float,
+        rise: float,
+        dxscale: float,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> Point:
+        (x, y) = pos
+        needcharspace = False
+        for obj in seq:
+            if isinstance(obj, (int, float)):
+                x -= obj * dxscale
+                needcharspace = True
+            else:
+                for cid in font.decode(obj):
+                    if needcharspace:
+                        x += charspace
+                    x += self.render_char(
+                        utils.translate_matrix(matrix, (x, y)),
+                        font,
+                        fontsize,
+                        scaling,
+                        rise,
+                        cid,
+                        ncs,
+                        graphicstate,
+                    )
+                    if cid == 32 and wordspace:
+                        x += wordspace
+                    needcharspace = True
+        return (x, y)
+
+    def render_string_vertical(
+        self,
+        seq: PDFTextSeq,
+        matrix: Matrix,
+        pos: Point,
+        font: PDFFont,
+        fontsize: float,
+        scaling: float,
+        charspace: float,
+        wordspace: float,
+        rise: float,
+        dxscale: float,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> Point:
+        (x, y) = pos
+        needcharspace = False
+        for obj in seq:
+            if isinstance(obj, (int, float)):
+                y -= obj * dxscale
+                needcharspace = True
+            else:
+                for cid in font.decode(obj):
+                    if needcharspace:
+                        y += charspace
+                    y += self.render_char(
+                        utils.translate_matrix(matrix, (x, y)),
+                        font,
+                        fontsize,
+                        scaling,
+                        rise,
+                        cid,
+                        ncs,
+                        graphicstate,
+                    )
+                    if cid == 32 and wordspace:
+                        y += wordspace
+                    needcharspace = True
+        return (x, y)
+
+    def render_char(
+        self,
+        matrix: Matrix,
+        font: PDFFont,
+        fontsize: float,
+        scaling: float,
+        rise: float,
+        cid: int,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> float:
+        return 0
+
+
+class TagExtractor(PDFDevice):
+    def __init__(
+        self, rsrcmgr: "PDFResourceManager", outfp: BinaryIO, codec: str = "utf-8"
+    ) -> None:
+        PDFDevice.__init__(self, rsrcmgr)
+        self.outfp = outfp
+        self.codec = codec
+        self.pageno = 0
+        self._stack: List[PSLiteral] = []
+
+    def render_string(
+        self,
+        textstate: "PDFTextState",
+        seq: PDFTextSeq,
+        ncs: PDFColorSpace,
+        graphicstate: "PDFGraphicState",
+    ) -> None:
+        font = textstate.font
+        assert font is not None
+        text = ""
+        for obj in seq:
+            if isinstance(obj, str):
+                obj = utils.make_compat_bytes(obj)
+            if not isinstance(obj, bytes):
+                continue
+            chars = font.decode(obj)
+            for cid in chars:
+                try:
+                    char = font.to_unichr(cid)
+                    text += char
+                except PDFUnicodeNotDefined:
+                    pass
+        self._write(utils.enc(text))
+
+    def begin_page(self, page: PDFPage, ctm: Matrix) -> None:
+        output = '<page id="%s" bbox="%s" rotate="%d">' % (
+            self.pageno,
+            utils.bbox2str(page.mediabox),
+            page.rotate,
+        )
+        self._write(output)
+        return
+
+    def end_page(self, page: PDFPage) -> None:
+        self._write("</page>\n")
+        self.pageno += 1
+        return
+
+    def begin_tag(self, tag: PSLiteral, props: Optional["PDFStackT"] = None) -> None:
+        s = ""
+        if isinstance(props, dict):
+            s = "".join(
+                [
+                    f' {utils.enc(k)}="{utils.make_compat_str(v)}"'
+                    for (k, v) in sorted(props.items())
+                ]
+            )
+        out_s = f"<{utils.enc(cast(str, tag.name))}{s}>"
+        self._write(out_s)
+        self._stack.append(tag)
+        return
+
+    def end_tag(self) -> None:
+        assert self._stack, str(self.pageno)
+        tag = self._stack.pop(-1)
+        out_s = "</%s>" % utils.enc(cast(str, tag.name))
+        self._write(out_s)
+        return
+
+    def do_tag(self, tag: PSLiteral, props: Optional["PDFStackT"] = None) -> None:
+        self.begin_tag(tag, props)
+        self._stack.pop(-1)
+        return
+
+    def _write(self, s: str) -> None:
+        self.outfp.write(s.encode(self.codec))
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfdocument.py b/templates/skills/file_manager/dependencies/pdfminer/pdfdocument.py
new file mode 100644
index 00000000..6898759f
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdfdocument.py
@@ -0,0 +1,1070 @@
+import itertools
+import logging
+import re
+import struct
+from hashlib import sha256, md5, sha384, sha512
+from typing import (
+    Any,
+    Callable,
+    Dict,
+    Iterable,
+    Iterator,
+    KeysView,
+    List,
+    Optional,
+    Sequence,
+    Tuple,
+    Type,
+    Union,
+    cast,
+)
+
+from cryptography.hazmat.backends import default_backend
+from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
+
+from .
import settings +from .arcfour import Arcfour +from .data_structures import NumberTree +from .pdfparser import PDFSyntaxError, PDFParser, PDFStreamParser +from .pdftypes import ( + DecipherCallable, + PDFStream, + decipher_all, + int_value, + str_value, + list_value, + uint_value, + dict_value, + stream_value, +) +from .pdfexceptions import PDFException, PDFTypeError, PDFObjectNotFound, PDFKeyError +from .psparser import literal_name, LIT, KWD +from .psexceptions import PSEOF +from .utils import choplist, decode_text, nunpack, format_int_roman, format_int_alpha + +log = logging.getLogger(__name__) + + +class PDFNoValidXRef(PDFSyntaxError): + pass + + +class PDFNoValidXRefWarning(SyntaxWarning): + """Legacy warning for missing xref. + + Not used anymore because warnings.warn is replaced by logger.Logger.warn. + """ + + pass + + +class PDFNoOutlines(PDFException): + pass + + +class PDFNoPageLabels(PDFException): + pass + + +class PDFDestinationNotFound(PDFException): + pass + + +class PDFEncryptionError(PDFException): + pass + + +class PDFPasswordIncorrect(PDFEncryptionError): + pass + + +class PDFEncryptionWarning(UserWarning): + """Legacy warning for failed decryption. + + Not used anymore because warnings.warn is replaced by logger.Logger.warn. + """ + + pass + + +class PDFTextExtractionNotAllowedWarning(UserWarning): + """Legacy warning for PDF that does not allow extraction. + + Not used anymore because warnings.warn is replaced by logger.Logger.warn. + """ + + pass + + +class PDFTextExtractionNotAllowed(PDFEncryptionError): + pass + + +class PDFTextExtractionNotAllowedError(PDFTextExtractionNotAllowed): + def __init__(self, *args: object) -> None: + from warnings import warn + + warn( + "PDFTextExtractionNotAllowedError will be removed in the future. " + "Use PDFTextExtractionNotAllowed instead.", + DeprecationWarning, + ) + super().__init__(*args) + + +# some predefined literals and keywords. 
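+# ObjStm marks compressed object streams, XRef marks cross-reference streams
+# (PDF 1.5+), and Catalog identifies the document root dictionary.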
+LITERAL_OBJSTM = LIT("ObjStm")
+LITERAL_XREF = LIT("XRef")
+LITERAL_CATALOG = LIT("Catalog")
+
+
+class PDFBaseXRef:
+    def get_trailer(self) -> Dict[str, Any]:
+        raise NotImplementedError
+
+    def get_objids(self) -> Iterable[int]:
+        return []
+
+    # Must return
+    #     (strmid, index, genno)
+    # or (None, pos, genno)
+    def get_pos(self, objid: int) -> Tuple[Optional[int], int, int]:
+        raise PDFKeyError(objid)
+
+    def load(self, parser: PDFParser) -> None:
+        raise NotImplementedError
+
+
+class PDFXRef(PDFBaseXRef):
+    def __init__(self) -> None:
+        self.offsets: Dict[int, Tuple[Optional[int], int, int]] = {}
+        self.trailer: Dict[str, Any] = {}
+
+    def __repr__(self) -> str:
+        return "<PDFXRef: offsets=%r>" % (self.offsets.keys())
+
+    def load(self, parser: PDFParser) -> None:
+        while True:
+            try:
+                (pos, line) = parser.nextline()
+                line = line.strip()
+                if not line:
+                    continue
+            except PSEOF:
+                raise PDFNoValidXRef("Unexpected EOF - file corrupted?")
+            if line.startswith(b"trailer"):
+                parser.seek(pos)
+                break
+            f = line.split(b" ")
+            if len(f) != 2:
+                error_msg = f"Trailer not found: {parser!r}: line={line!r}"
+                raise PDFNoValidXRef(error_msg)
+            try:
+                (start, nobjs) = map(int, f)
+            except ValueError:
+                error_msg = f"Invalid line: {parser!r}: line={line!r}"
+                raise PDFNoValidXRef(error_msg)
+            for objid in range(start, start + nobjs):
+                try:
+                    (_, line) = parser.nextline()
+                    line = line.strip()
+                except PSEOF:
+                    raise PDFNoValidXRef("Unexpected EOF - file corrupted?")
+                f = line.split(b" ")
+                if len(f) != 3:
+                    error_msg = "Invalid XRef format: {!r}, line={!r}".format(
+                        parser, line
+                    )
+                    raise PDFNoValidXRef(error_msg)
+                (pos_b, genno_b, use_b) = f
+                if use_b != b"n":
+                    continue
+                self.offsets[objid] = (None, int(pos_b), int(genno_b))
+        log.debug("xref objects: %r", self.offsets)
+        self.load_trailer(parser)
+
+    def load_trailer(self, parser: PDFParser) -> None:
+        try:
+            (_, kwd) = parser.nexttoken()
+            assert kwd is KWD(b"trailer"), str(kwd)
+            (_, dic) = parser.nextobject()
+        except PSEOF:
+            x = parser.pop(1)
+            if not x:
+                raise PDFNoValidXRef("Unexpected EOF - file corrupted")
+            (_, dic) = x[0]
+        self.trailer.update(dict_value(dic))
+        log.debug("trailer=%r", self.trailer)
+
+    def get_trailer(self) -> Dict[str, Any]:
+        return self.trailer
+
+    def get_objids(self) -> KeysView[int]:
+        return self.offsets.keys()
+
+    def get_pos(self, objid: int) -> Tuple[Optional[int], int, int]:
+        return self.offsets[objid]
+
+
+class PDFXRefFallback(PDFXRef):
+    def __repr__(self) -> str:
+        return "<PDFXRefFallback: offsets=%r>" % (self.offsets.keys())
+
+    PDFOBJ_CUE = re.compile(r"^(\d+)\s+(\d+)\s+obj\b")
+
+    def load(self, parser: PDFParser) -> None:
+        parser.seek(0)
+        while 1:
+            try:
+                (pos, line_bytes) = parser.nextline()
+            except PSEOF:
+                break
+            if line_bytes.startswith(b"trailer"):
+                parser.seek(pos)
+                self.load_trailer(parser)
+                log.debug("trailer: %r", self.trailer)
+                break
+            line = line_bytes.decode("latin-1")  # default pdf encoding
+            m = self.PDFOBJ_CUE.match(line)
+            if not m:
+                continue
+            (objid_s, genno_s) = m.groups()
+            objid = int(objid_s)
+            genno = int(genno_s)
+            self.offsets[objid] = (None, pos, genno)
+            # expand ObjStm.
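+            # An object stream starts with N pairs of (object id, offset),
+            # flattened into one list below, so objs[index * 2] is the id of
+            # the compressed object stored at that index.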
+            parser.seek(pos)
+            (_, obj) = parser.nextobject()
+            if isinstance(obj, PDFStream) and obj.get("Type") is LITERAL_OBJSTM:
+                stream = stream_value(obj)
+                try:
+                    n = stream["N"]
+                except KeyError:
+                    if settings.STRICT:
+                        raise PDFSyntaxError("N is not defined: %r" % stream)
+                    n = 0
+                parser1 = PDFStreamParser(stream.get_data())
+                objs: List[int] = []
+                try:
+                    while 1:
+                        (_, obj) = parser1.nextobject()
+                        objs.append(cast(int, obj))
+                except PSEOF:
+                    pass
+                n = min(n, len(objs) // 2)
+                for index in range(n):
+                    objid1 = objs[index * 2]
+                    self.offsets[objid1] = (objid, index, 0)
+
+
+class PDFXRefStream(PDFBaseXRef):
+    def __init__(self) -> None:
+        self.data: Optional[bytes] = None
+        self.entlen: Optional[int] = None
+        self.fl1: Optional[int] = None
+        self.fl2: Optional[int] = None
+        self.fl3: Optional[int] = None
+        self.ranges: List[Tuple[int, int]] = []
+
+    def __repr__(self) -> str:
+        return "<PDFXRefStream: ranges=%r>" % (self.ranges)
+
+    def load(self, parser: PDFParser) -> None:
+        (_, objid) = parser.nexttoken()  # ignored
+        (_, genno) = parser.nexttoken()  # ignored
+        (_, kwd) = parser.nexttoken()
+        (_, stream) = parser.nextobject()
+        if not isinstance(stream, PDFStream) or stream.get("Type") is not LITERAL_XREF:
+            raise PDFNoValidXRef("Invalid PDF stream spec.")
+        size = stream["Size"]
+        index_array = stream.get("Index", (0, size))
+        if len(index_array) % 2 != 0:
+            raise PDFSyntaxError("Invalid index number")
+        self.ranges.extend(cast(Iterator[Tuple[int, int]], choplist(2, index_array)))
+        (self.fl1, self.fl2, self.fl3) = stream["W"]
+        assert self.fl1 is not None and self.fl2 is not None and self.fl3 is not None
+        self.data = stream.get_data()
+        self.entlen = self.fl1 + self.fl2 + self.fl3
+        self.trailer = stream.attrs
+        log.debug(
+            "xref stream: objid=%s, fields=%d,%d,%d",
+            ", ".join(map(repr, self.ranges)),
+            self.fl1,
+            self.fl2,
+            self.fl3,
+        )
+        return
+
+    def get_trailer(self) -> Dict[str, Any]:
+        return self.trailer
+
+    def get_objids(self) -> Iterator[int]:
+        for (start, nobjs) in self.ranges:
+            for i in range(nobjs):
+                assert self.entlen is not None
+                assert self.data is not None
+                offset = self.entlen * i
+                ent = self.data[offset : offset + self.entlen]
+                f1 = nunpack(ent[: self.fl1], 1)
+                if f1 == 1 or f1 == 2:
+                    yield start + i
+        return
+
+    def get_pos(self, objid: int) -> Tuple[Optional[int], int, int]:
+        index = 0
+        for (start, nobjs) in self.ranges:
+            if start <= objid and objid < start + nobjs:
+                index += objid - start
+                break
+            else:
+                index += nobjs
+        else:
+            raise PDFKeyError(objid)
+        assert self.entlen is not None
+        assert self.data is not None
+        assert self.fl1 is not None and self.fl2 is not None and self.fl3 is not None
+        offset = self.entlen * index
+        ent = self.data[offset : offset + self.entlen]
+        f1 = nunpack(ent[: self.fl1], 1)
+        f2 = nunpack(ent[self.fl1 : self.fl1 + self.fl2])
+        f3 = nunpack(ent[self.fl1 + self.fl2 :])
+        if f1 == 1:
+            return (None, f2, f3)
+        elif f1 == 2:
+            return (f2, f3, 0)
+        else:
+            # this is a free object
+            raise PDFKeyError(objid)
+
+
+class PDFStandardSecurityHandler:
+
+    PASSWORD_PADDING = (
+        b"(\xbfN^Nu\x8aAd\x00NV\xff\xfa\x01\x08"
+        b"..\x00\xb6\xd0h>\x80/\x0c\xa9\xfedSiz"
+    )
+    supported_revisions: Tuple[int, ...] = (2, 3)
+
+    def __init__(
+        self, docid: Sequence[bytes], param: Dict[str, Any], password: str = ""
+    ) -> None:
+        self.docid = docid
+        self.param = param
+        self.password = password
+        self.init()
+        return
+
+    def init(self) -> None:
+        self.init_params()
+        if self.r not in self.supported_revisions:
+            error_msg = "Unsupported revision: param=%r" % self.param
+            raise PDFEncryptionError(error_msg)
+        self.init_key()
+        return
+
+    def init_params(self) -> None:
+        self.v = int_value(self.param.get("V", 0))
+        self.r = int_value(self.param["R"])
+        self.p = uint_value(self.param["P"], 32)
+        self.o = str_value(self.param["O"])
+        self.u = str_value(self.param["U"])
+        self.length = int_value(self.param.get("Length", 40))
+        return
+
+    def init_key(self) -> None:
+        self.key = self.authenticate(self.password)
+        if self.key is None:
+            raise PDFPasswordIncorrect
+        return
+
+    def is_printable(self) -> bool:
+        return bool(self.p & 4)
+
+    def is_modifiable(self) -> bool:
+        return bool(self.p & 8)
+
+    def is_extractable(self) -> bool:
+        return bool(self.p & 16)
+
+    def compute_u(self, key: bytes) -> bytes:
+        if self.r == 2:
+            # Algorithm 3.4
+            return Arcfour(key).encrypt(self.PASSWORD_PADDING)  # 2
+        else:
+            # Algorithm 3.5
+            hash = md5(self.PASSWORD_PADDING)  # 2
+            hash.update(self.docid[0])  # 3
+            result = Arcfour(key).encrypt(hash.digest())  # 4
+            for i in range(1, 20):  # 5
+                k = b"".join(bytes((c ^ i,)) for c in iter(key))
+                result = Arcfour(k).encrypt(result)
+            result += result  # 6
+            return result
+
+    def compute_encryption_key(self, password: bytes) -> bytes:
+        # Algorithm 3.2
+        password = (password + self.PASSWORD_PADDING)[:32]  # 1
+        hash = md5(password)  # 2
+        hash.update(self.o)  # 3
+        # See https://github.com/pdfminer/pdfminer.six/issues/186
+        hash.update(struct.pack("<L", self.p))  # 4
+        hash.update(self.docid[0])  # 5
+        if self.r >= 4:
+            if not cast(PDFStandardSecurityHandlerV4, self).encrypt_metadata:
+                hash.update(b"\xff\xff\xff\xff")
+        result = hash.digest()
+        n = 5
+        if self.r >= 3:
+            n = self.length // 8
+            for _ in range(50):
+                result = md5(result[:n]).digest()
+        return result[:n]
+
+    def authenticate(self, password: str) -> Optional[bytes]:
+        password_bytes = password.encode("latin1")
+        key = self.authenticate_user_password(password_bytes)
+        if key is None:
+            key = self.authenticate_owner_password(password_bytes)
+        return key
+
+    def authenticate_user_password(self, password: bytes) -> Optional[bytes]:
+        key = self.compute_encryption_key(password)
+        if self.verify_encryption_key(key):
+            return key
+        else:
+            return None
+
+    def verify_encryption_key(self, key: bytes) -> bool:
+        # Algorithm 3.6
+        u = self.compute_u(key)
+        if self.r == 2:
+            return u == self.u
+        return u[:16] == self.u[:16]
+
+    def authenticate_owner_password(self, password: bytes) -> Optional[bytes]:
+        # Algorithm 3.7
+        password = (password + self.PASSWORD_PADDING)[:32]
+        hash = md5(password)
+        if self.r >= 3:
+            for _ in range(50):
+                hash = md5(hash.digest())
+        n = 5
+        if self.r >= 3:
+            n = self.length // 8
+        key = hash.digest()[:n]
+        if self.r == 2:
+            user_password = Arcfour(key).decrypt(self.o)
+        else:
+            user_password = self.o
+            for i in range(19, -1, -1):
+                k = b"".join(bytes((c ^ i,)) for c in iter(key))
+                user_password = Arcfour(k).decrypt(user_password)
+        return self.authenticate_user_password(user_password)
+
+    def decrypt(
+        self,
+        objid: int,
+        genno: int,
+        data: bytes,
+        attrs: Optional[Dict[str, Any]] = None,
+    ) -> bytes:
+        return self.decrypt_rc4(objid, genno, data)
+
+    def decrypt_rc4(self, objid: int, genno: int, data: bytes) -> bytes:
+        assert self.key is not None
+        key = self.key + struct.pack("<L", objid)[:3] + struct.pack("<L", genno)[:2]
+        hash = md5(key)
+        key = hash.digest()[: min(len(key), 16)]
+        return Arcfour(key).decrypt(data)
+
+
+class PDFStandardSecurityHandlerV4(PDFStandardSecurityHandler):
+
+    supported_revisions: Tuple[int, ...] = (4,)
+
+    def init_params(self) -> None:
+        super().init_params()
+        self.length = 128
+        self.cf = dict_value(self.param.get("CF"))
+        self.stmf = literal_name(self.param["StmF"])
+        self.strf = literal_name(self.param["StrF"])
+        self.encrypt_metadata = bool(self.param.get("EncryptMetadata", True))
+        if self.stmf != self.strf:
+            error_msg = "Unsupported crypt filter: param=%r" % self.param
+            raise PDFEncryptionError(error_msg)
+        self.cfm = {}
+        for k, v in self.cf.items():
+            f = self.get_cfm(literal_name(v["CFM"]))
+            if f is None:
+                error_msg = "Unknown crypt filter method: param=%r" % self.param
+                raise PDFEncryptionError(error_msg)
+            self.cfm[k] = f
+        self.cfm["Identity"] = self.decrypt_identity
+        if self.strf not in self.cfm:
+            error_msg = "Undefined crypt filter: param=%r" % self.param
+            raise PDFEncryptionError(error_msg)
+        return
+
+    def get_cfm(self, name: str) -> Optional[Callable[[int, int, bytes], bytes]]:
+        if name == "V2":
+            return self.decrypt_rc4
+        elif name == "AESV2":
+            return self.decrypt_aes128
+        else:
+            return None
+
+    def decrypt(
+        self,
+        objid: int,
+        genno: int,
+        data: bytes,
+        attrs: Optional[Dict[str, Any]] = None,
+        name: Optional[str] = None,
+    ) -> bytes:
+        if not self.encrypt_metadata and attrs is not None:
+            t = attrs.get("Type")
+            if t is not None and literal_name(t) == "Metadata":
+                return data
+        if name is None:
+            name = self.strf
+        return self.cfm[name](objid, genno, data)
+
+    def decrypt_identity(self, objid: int, genno: int, data: bytes) -> bytes:
+        return data
+
+    def decrypt_aes128(self, objid: int, genno: int, data: bytes) -> bytes:
+        assert self.key is not None
+        key = (
+            self.key
+            + struct.pack("<L", objid)[:3]
+            + struct.pack("<L", genno)[:2]
+            + b"sAlT"
+        )
+        hash = md5(key)
+        key = hash.digest()[: min(len(key), 16)]
+        initialization_vector = data[:16]
+        ciphertext = data[16:]
+        cipher = Cipher(
+            algorithms.AES(key),
+            modes.CBC(initialization_vector),
+            backend=default_backend(),
+        )  # type: ignore
+        return cipher.decryptor().update(ciphertext)  # type: ignore
+
+
+class PDFStandardSecurityHandlerV5(PDFStandardSecurityHandlerV4):
+
+    supported_revisions = (5, 6)
+
+    def init_params(self) -> None:
+        super().init_params()
+        self.length = 256
+        self.oe = str_value(self.param["OE"])
+        self.ue = str_value(self.param["UE"])
+        self.o_hash = self.o[:32]
+        self.o_validation_salt = self.o[32:40]
+        self.o_key_salt = self.o[40:]
+        self.u_hash = self.u[:32]
+        self.u_validation_salt = self.u[32:40]
+        self.u_key_salt = self.u[40:]
+        return
+
+    def get_cfm(self, name: str) -> Optional[Callable[[int, int, bytes], bytes]]:
+        if name == "AESV3":
+            return self.decrypt_aes256
+        else:
+            return None
+
+    def authenticate(self, password: str) -> Optional[bytes]:
+        password_b = self._normalize_password(password)
+        hash = self._password_hash(password_b, self.o_validation_salt, self.u)
+        if hash == self.o_hash:
+            hash = self._password_hash(password_b, self.o_key_salt, self.u)
+            cipher = Cipher(
+                algorithms.AES(hash), modes.CBC(b"\0" * 16), backend=default_backend()
+            )  # type: ignore
+            return cipher.decryptor().update(self.oe)  # type: ignore
+        hash = self._password_hash(password_b, self.u_validation_salt)
+        if hash == self.u_hash:
+            hash = self._password_hash(password_b, self.u_key_salt)
+            cipher = Cipher(
+                algorithms.AES(hash), modes.CBC(b"\0" * 16), backend=default_backend()
+            )  # type: ignore
+            return cipher.decryptor().update(self.ue)  # type: ignore
+        return None
+
+    def _normalize_password(self, password: str) -> bytes:
+        if self.r == 6:
+            # saslprep expects non-empty strings, apparently
+            if not password:
+                return b""
+            from ._saslprep import saslprep
+
+            password = saslprep(password)
+        return password.encode("utf-8")[:127]
+
+    def _password_hash(
+        self, password: bytes, salt: bytes, vector: Optional[bytes] = None
+    ) -> bytes:
+        """
+        Compute password hash depending on revision number
+        """
+        if self.r == 5:
+            return self._r5_password(password, salt, vector)
+        return self._r6_password(password, salt[0:8], vector)
+
+    def _r5_password(
+        self, password: bytes, salt: bytes, vector:
Optional[bytes] = None + ) -> bytes: + """ + Compute the password for revision 5 + """ + hash = sha256(password) + hash.update(salt) + if vector is not None: + hash.update(vector) + return hash.digest() + + def _r6_password( + self, password: bytes, salt: bytes, vector: Optional[bytes] = None + ) -> bytes: + """ + Compute the password for revision 6 + """ + initial_hash = sha256(password) + initial_hash.update(salt) + if vector is not None: + initial_hash.update(vector) + k = initial_hash.digest() + hashes = (sha256, sha384, sha512) + round_no = last_byte_val = 0 + while round_no < 64 or last_byte_val > round_no - 32: + k1 = (password + k + (vector or b"")) * 64 + e = self._aes_cbc_encrypt(key=k[:16], iv=k[16:32], data=k1) + # compute the first 16 bytes of e, + # interpreted as an unsigned integer mod 3 + next_hash = hashes[self._bytes_mod_3(e[:16])] + k = next_hash(e).digest() + last_byte_val = e[len(e) - 1] + round_no += 1 + return k[:32] + + @staticmethod + def _bytes_mod_3(input_bytes: bytes) -> int: + # 256 is 1 mod 3, so we can just sum 'em + return sum(b % 3 for b in input_bytes) % 3 + + def _aes_cbc_encrypt(self, key: bytes, iv: bytes, data: bytes) -> bytes: + cipher = Cipher(algorithms.AES(key), modes.CBC(iv)) + encryptor = cipher.encryptor() # type: ignore + return encryptor.update(data) + encryptor.finalize() # type: ignore + + def decrypt_aes256(self, objid: int, genno: int, data: bytes) -> bytes: + initialization_vector = data[:16] + ciphertext = data[16:] + assert self.key is not None + cipher = Cipher( + algorithms.AES(self.key), + modes.CBC(initialization_vector), + backend=default_backend(), + ) # type: ignore + return cipher.decryptor().update(ciphertext) # type: ignore + + +class PDFDocument: + """PDFDocument object represents a PDF document. + + Since a PDF file can be very big, normally it is not loaded at + once. So PDF document has to cooperate with a PDF parser in order to + dynamically import the data as processing goes. + + Typical usage: + doc = PDFDocument(parser, password) + obj = doc.getobj(objid) + + """ + + security_handler_registry: Dict[int, Type[PDFStandardSecurityHandler]] = { + 1: PDFStandardSecurityHandler, + 2: PDFStandardSecurityHandler, + 4: PDFStandardSecurityHandlerV4, + 5: PDFStandardSecurityHandlerV5, + } + + def __init__( + self, + parser: PDFParser, + password: str = "", + caching: bool = True, + fallback: bool = True, + ) -> None: + "Set the document to use a given PDFParser object." + self.caching = caching + self.xrefs: List[PDFBaseXRef] = [] + self.info = [] + self.catalog: Dict[str, Any] = {} + self.encryption: Optional[Tuple[Any, Any]] = None + self.decipher: Optional[DecipherCallable] = None + self._parser = None + self._cached_objs: Dict[int, Tuple[object, int]] = {} + self._parsed_objs: Dict[int, Tuple[List[object], int]] = {} + self._parser = parser + self._parser.set_document(self) + self.is_printable = self.is_modifiable = self.is_extractable = True + # Retrieve the information of each header that was appended + # (maybe multiple times) at the end of the document. + try: + pos = self.find_xref(parser) + self.read_xref_from(parser, pos, self.xrefs) + except PDFNoValidXRef: + if fallback: + parser.fallback = True + newxref = PDFXRefFallback() + newxref.load(parser) + self.xrefs.append(newxref) + + for xref in self.xrefs: + trailer = xref.get_trailer() + if not trailer: + continue + # If there's an encryption info, remember it. 
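+            # The /Encrypt dictionary and the file /ID together parameterize
+            # the security handler that _initialize_password sets up below.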
+ if "Encrypt" in trailer: + if "ID" in trailer: + id_value = list_value(trailer["ID"]) + else: + # Some documents may not have a /ID, use two empty + # byte strings instead. Solves + # https://github.com/pdfminer/pdfminer.six/issues/594 + id_value = (b"", b"") + self.encryption = (id_value, dict_value(trailer["Encrypt"])) + self._initialize_password(password) + if "Info" in trailer: + self.info.append(dict_value(trailer["Info"])) + if "Root" in trailer: + # Every PDF file must have exactly one /Root dictionary. + self.catalog = dict_value(trailer["Root"]) + break + else: + raise PDFSyntaxError("No /Root object! - Is this really a PDF?") + if self.catalog.get("Type") is not LITERAL_CATALOG: + if settings.STRICT: + raise PDFSyntaxError("Catalog not found!") + return + + KEYWORD_OBJ = KWD(b"obj") + + # _initialize_password(password=b'') + # Perform the initialization with a given password. + def _initialize_password(self, password: str = "") -> None: + assert self.encryption is not None + (docid, param) = self.encryption + if literal_name(param.get("Filter")) != "Standard": + raise PDFEncryptionError("Unknown filter: param=%r" % param) + v = int_value(param.get("V", 0)) + factory = self.security_handler_registry.get(v) + if factory is None: + raise PDFEncryptionError("Unknown algorithm: param=%r" % param) + handler = factory(docid, param, password) + self.decipher = handler.decrypt + self.is_printable = handler.is_printable() + self.is_modifiable = handler.is_modifiable() + self.is_extractable = handler.is_extractable() + assert self._parser is not None + self._parser.fallback = False # need to read streams with exact length + return + + def _getobj_objstm(self, stream: PDFStream, index: int, objid: int) -> object: + if stream.objid in self._parsed_objs: + (objs, n) = self._parsed_objs[stream.objid] + else: + (objs, n) = self._get_objects(stream) + if self.caching: + assert stream.objid is not None + self._parsed_objs[stream.objid] = (objs, n) + i = n * 2 + index + try: + obj = objs[i] + except IndexError: + raise PDFSyntaxError("index too big: %r" % index) + return obj + + def _get_objects(self, stream: PDFStream) -> Tuple[List[object], int]: + if stream.get("Type") is not LITERAL_OBJSTM: + if settings.STRICT: + raise PDFSyntaxError("Not a stream object: %r" % stream) + try: + n = cast(int, stream["N"]) + except KeyError: + if settings.STRICT: + raise PDFSyntaxError("N is not defined: %r" % stream) + n = 0 + parser = PDFStreamParser(stream.get_data()) + parser.set_document(self) + objs: List[object] = [] + try: + while 1: + (_, obj) = parser.nextobject() + objs.append(obj) + except PSEOF: + pass + return (objs, n) + + def _getobj_parse(self, pos: int, objid: int) -> object: + assert self._parser is not None + self._parser.seek(pos) + (_, objid1) = self._parser.nexttoken() # objid + (_, genno) = self._parser.nexttoken() # genno + (_, kwd) = self._parser.nexttoken() + # hack around malformed pdf files + # copied from https://github.com/jaepil/pdfminer3k/blob/master/ + # pdfminer/pdfparser.py#L399 + # to solve https://github.com/pdfminer/pdfminer.six/issues/56 + # assert objid1 == objid, str((objid1, objid)) + if objid1 != objid: + x = [] + while kwd is not self.KEYWORD_OBJ: + (_, kwd) = self._parser.nexttoken() + x.append(kwd) + if len(x) >= 2: + objid1 = x[-2] + # #### end hack around malformed pdf files + if objid1 != objid: + raise PDFSyntaxError(f"objid mismatch: {objid1!r}={objid!r}") + + if kwd != KWD(b"obj"): + raise PDFSyntaxError("Invalid object spec: offset=%r" % pos) + (_, obj) = 
self._parser.nextobject() + return obj + + # can raise PDFObjectNotFound + def getobj(self, objid: int) -> object: + """Get object from PDF + + :raises PDFException if PDFDocument is not initialized + :raises PDFObjectNotFound if objid does not exist in PDF + """ + if not self.xrefs: + raise PDFException("PDFDocument is not initialized") + log.debug("getobj: objid=%r", objid) + if objid in self._cached_objs: + (obj, genno) = self._cached_objs[objid] + else: + for xref in self.xrefs: + try: + (strmid, index, genno) = xref.get_pos(objid) + except KeyError: + continue + try: + if strmid is not None: + stream = stream_value(self.getobj(strmid)) + obj = self._getobj_objstm(stream, index, objid) + else: + obj = self._getobj_parse(index, objid) + if self.decipher: + obj = decipher_all(self.decipher, objid, genno, obj) + + if isinstance(obj, PDFStream): + obj.set_objid(objid, genno) + break + except (PSEOF, PDFSyntaxError): + continue + else: + raise PDFObjectNotFound(objid) + log.debug("register: objid=%r: %r", objid, obj) + if self.caching: + self._cached_objs[objid] = (obj, genno) + return obj + + OutlineType = Tuple[Any, Any, Any, Any, Any] + + def get_outlines(self) -> Iterator[OutlineType]: + if "Outlines" not in self.catalog: + raise PDFNoOutlines + + def search(entry: object, level: int) -> Iterator[PDFDocument.OutlineType]: + entry = dict_value(entry) + if "Title" in entry: + if "A" in entry or "Dest" in entry: + title = decode_text(str_value(entry["Title"])) + dest = entry.get("Dest") + action = entry.get("A") + se = entry.get("SE") + yield (level, title, dest, action, se) + if "First" in entry and "Last" in entry: + yield from search(entry["First"], level + 1) + if "Next" in entry: + yield from search(entry["Next"], level) + return + + return search(self.catalog["Outlines"], 0) + + def get_page_labels(self) -> Iterator[str]: + """ + Generate page label strings for the PDF document. + + If the document includes page labels, generates strings, one per page. + If not, raises PDFNoPageLabels. + + The resulting iteration is unbounded. + """ + assert self.catalog is not None + + try: + page_labels = PageLabels(self.catalog["PageLabels"]) + except (PDFTypeError, KeyError): + raise PDFNoPageLabels + + return page_labels.labels + + def lookup_name(self, cat: str, key: Union[str, bytes]) -> Any: + try: + names = dict_value(self.catalog["Names"]) + except (PDFTypeError, KeyError): + raise PDFKeyError((cat, key)) + # may raise KeyError + d0 = dict_value(names[cat]) + + def lookup(d: Dict[str, Any]) -> Any: + if "Limits" in d: + (k1, k2) = list_value(d["Limits"]) + if key < k1 or k2 < key: + return None + if "Names" in d: + objs = list_value(d["Names"]) + names = dict( + cast(Iterator[Tuple[Union[str, bytes], Any]], choplist(2, objs)) + ) + return names[key] + if "Kids" in d: + for c in list_value(d["Kids"]): + v = lookup(dict_value(c)) + if v: + return v + raise PDFKeyError((cat, key)) + + return lookup(d0) + + def get_dest(self, name: Union[str, bytes]) -> Any: + try: + # PDF-1.2 or later + obj = self.lookup_name("Dests", name) + except KeyError: + # PDF-1.1 or prior + if "Dests" not in self.catalog: + raise PDFDestinationNotFound(name) + d0 = dict_value(self.catalog["Dests"]) + if name not in d0: + raise PDFDestinationNotFound(name) + obj = d0[name] + return obj + + # find_xref + def find_xref(self, parser: PDFParser) -> int: + """Internal function used to locate the first XRef.""" + # search the last xref table by scanning the file backwards. 
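+        # A well-formed PDF ends with a 'startxref' line, the byte offset of
+        # the last xref section, and '%%EOF'; reading backwards, the last
+        # non-empty line seen before 'startxref' is that offset.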
+ prev = None + for line in parser.revreadlines(): + line = line.strip() + log.debug("find_xref: %r", line) + if line == b"startxref": + break + if line: + prev = line + else: + raise PDFNoValidXRef("Unexpected EOF") + log.debug("xref found: pos=%r", prev) + assert prev is not None + return int(prev) + + # read xref table + def read_xref_from( + self, parser: PDFParser, start: int, xrefs: List[PDFBaseXRef] + ) -> None: + """Reads XRefs from the given location.""" + parser.seek(start) + parser.reset() + try: + (pos, token) = parser.nexttoken() + except PSEOF: + raise PDFNoValidXRef("Unexpected EOF") + log.debug("read_xref_from: start=%d, token=%r", start, token) + if isinstance(token, int): + # XRefStream: PDF-1.5 + parser.seek(pos) + parser.reset() + xref: PDFBaseXRef = PDFXRefStream() + xref.load(parser) + else: + if token is parser.KEYWORD_XREF: + parser.nextline() + xref = PDFXRef() + xref.load(parser) + xrefs.append(xref) + trailer = xref.get_trailer() + log.debug("trailer: %r", trailer) + if "XRefStm" in trailer: + pos = int_value(trailer["XRefStm"]) + self.read_xref_from(parser, pos, xrefs) + if "Prev" in trailer: + # find previous xref + pos = int_value(trailer["Prev"]) + self.read_xref_from(parser, pos, xrefs) + return + + +class PageLabels(NumberTree): + """PageLabels from the document catalog. + + See Section 8.3.1 in the PDF Reference. + """ + + @property + def labels(self) -> Iterator[str]: + ranges = self.values + + # The tree must begin with page index 0 + if len(ranges) == 0 or ranges[0][0] != 0: + if settings.STRICT: + raise PDFSyntaxError("PageLabels is missing page index 0") + else: + # Try to cope, by assuming empty labels for the initial pages + ranges.insert(0, (0, {})) + + for (next, (start, label_dict_unchecked)) in enumerate(ranges, 1): + label_dict = dict_value(label_dict_unchecked) + style = label_dict.get("S") + prefix = decode_text(str_value(label_dict.get("P", b""))) + first_value = int_value(label_dict.get("St", 1)) + + if next == len(ranges): + # This is the last specified range. It continues until the end + # of the document. + values: Iterable[int] = itertools.count(first_value) + else: + end, _ = ranges[next] + range_length = end - start + values = range(first_value, first_value + range_length) + + for value in values: + label = self._format_page_label(value, style) + yield prefix + label + + @staticmethod + def _format_page_label(value: int, style: Any) -> str: + """Format page label value in a specific style""" + if style is None: + label = "" + elif style is LIT("D"): # Decimal arabic numerals + label = str(value) + elif style is LIT("R"): # Uppercase roman numerals + label = format_int_roman(value).upper() + elif style is LIT("r"): # Lowercase roman numerals + label = format_int_roman(value) + elif style is LIT("A"): # Uppercase letters A-Z, AA-ZZ... + label = format_int_alpha(value).upper() + elif style is LIT("a"): # Lowercase letters a-z, aa-zz... 
+ label = format_int_alpha(value) + else: + log.warning("Unknown page label style: %r", style) + label = "" + return label diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfexceptions.py b/templates/skills/file_manager/dependencies/pdfminer/pdfexceptions.py new file mode 100644 index 00000000..224c28fe --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/pdfexceptions.py @@ -0,0 +1,33 @@ +from pdfminer.psexceptions import PSException + + +class PDFException(PSException): + pass + + +class PDFTypeError(PDFException, TypeError): + pass + + +class PDFValueError(PDFException, ValueError): + pass + + +class PDFObjectNotFound(PDFException): + pass + + +class PDFNotImplementedError(PDFException, NotImplementedError): + pass + + +class PDFKeyError(PDFException, KeyError): + pass + + +class PDFEOFError(PDFException, EOFError): + pass + + +class PDFIOError(PDFException, IOError): + pass diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdffont.py b/templates/skills/file_manager/dependencies/pdfminer/pdffont.py new file mode 100644 index 00000000..a32b55e4 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/pdffont.py @@ -0,0 +1,1215 @@ +import logging +import struct +import sys +from io import BytesIO +from typing import ( + Any, + BinaryIO, + Dict, + Iterable, + Iterator, + List, + Mapping, + Optional, + Tuple, + Union, + cast, + TYPE_CHECKING, +) + +from . import settings +from .cmapdb import CMap +from .cmapdb import CMapBase +from .cmapdb import CMapDB +from .cmapdb import CMapParser +from .cmapdb import FileUnicodeMap +from .cmapdb import IdentityUnicodeMap +from .cmapdb import UnicodeMap +from .encodingdb import EncodingDB +from .encodingdb import name2unicode +from .fontmetrics import FONT_METRICS +from .pdftypes import PDFStream +from pdfminer.pdfexceptions import PDFException, PDFValueError, PDFKeyError +from .pdftypes import dict_value +from .pdftypes import int_value +from .pdftypes import list_value +from .pdftypes import num_value +from .pdftypes import resolve1, resolve_all +from .pdftypes import stream_value +from .psparser import KWD +from pdfminer.psexceptions import PSEOF +from .psparser import LIT +from .psparser import PSKeyword +from .psparser import PSLiteral +from .psparser import PSStackParser +from .psparser import literal_name +from .utils import Matrix, Point +from .utils import Rect +from .utils import apply_matrix_norm +from .utils import choplist +from .utils import nunpack + +if TYPE_CHECKING: + from .pdfinterp import PDFResourceManager + +log = logging.getLogger(__name__) + + +def get_widths(seq: Iterable[object]) -> Dict[int, float]: + """Build a mapping of character widths for horizontal writing.""" + widths: Dict[int, float] = {} + r: List[float] = [] + for v in seq: + if isinstance(v, list): + if r: + char1 = r[-1] + for (i, w) in enumerate(v): + widths[cast(int, char1) + i] = w + r = [] + elif isinstance(v, (int, float)): # == utils.isnumber(v) + r.append(v) + if len(r) == 3: + (char1, char2, w) = r + for i in range(cast(int, char1), cast(int, char2) + 1): + widths[i] = w + r = [] + return widths + + +def get_widths2(seq: Iterable[object]) -> Dict[int, Tuple[float, Point]]: + """Build a mapping of character widths for vertical writing.""" + widths: Dict[int, Tuple[float, Point]] = {} + r: List[float] = [] + for v in seq: + if isinstance(v, list): + if r: + char1 = r[-1] + for (i, (w, vx, vy)) in enumerate(choplist(3, v)): + widths[cast(int, char1) + i] = (w, (vx, vy)) + r = [] + elif 
isinstance(v, (int, float)): # == utils.isnumber(v) + r.append(v) + if len(r) == 5: + (char1, char2, w, vx, vy) = r + for i in range(cast(int, char1), cast(int, char2) + 1): + widths[i] = (w, (vx, vy)) + r = [] + return widths + + +class FontMetricsDB: + @classmethod + def get_metrics(cls, fontname: str) -> Tuple[Dict[str, object], Dict[str, int]]: + return FONT_METRICS[fontname] + + +# int here means that we're not extending PSStackParser with additional types. +class Type1FontHeaderParser(PSStackParser[int]): + + KEYWORD_BEGIN = KWD(b"begin") + KEYWORD_END = KWD(b"end") + KEYWORD_DEF = KWD(b"def") + KEYWORD_PUT = KWD(b"put") + KEYWORD_DICT = KWD(b"dict") + KEYWORD_ARRAY = KWD(b"array") + KEYWORD_READONLY = KWD(b"readonly") + KEYWORD_FOR = KWD(b"for") + + def __init__(self, data: BinaryIO) -> None: + PSStackParser.__init__(self, data) + self._cid2unicode: Dict[int, str] = {} + return + + def get_encoding(self) -> Dict[int, str]: + """Parse the font encoding. + + The Type1 font encoding maps character codes to character names. These + character names could either be standard Adobe glyph names, or + character names associated with custom CharStrings for this font. A + CharString is a sequence of operations that describe how the character + should be drawn. Currently, this function returns '' (empty string) + for character names that are associated with a CharStrings. + + Reference: Adobe Systems Incorporated, Adobe Type 1 Font Format + + :returns mapping of character identifiers (cid's) to unicode characters + """ + while 1: + try: + (cid, name) = self.nextobject() + except PSEOF: + break + try: + self._cid2unicode[cid] = name2unicode(cast(str, name)) + except KeyError as e: + log.debug(str(e)) + return self._cid2unicode + + def do_keyword(self, pos: int, token: PSKeyword) -> None: + if token is self.KEYWORD_PUT: + ((_, key), (_, value)) = self.pop(2) + if isinstance(key, int) and isinstance(value, PSLiteral): + self.add_results((key, literal_name(value))) + return + + +NIBBLES = ("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", ".", "e", "e-", None, "-") + +# Mapping of cmap names. Original cmap name is kept if not in the mapping. 
+# (missing reference for why DLIdent is mapped to Identity) +IDENTITY_ENCODER = { + "DLIdent-H": "Identity-H", + "DLIdent-V": "Identity-V", +} + + +def getdict(data: bytes) -> Dict[int, List[Union[float, int]]]: + d: Dict[int, List[Union[float, int]]] = {} + fp = BytesIO(data) + stack: List[Union[float, int]] = [] + while 1: + c = fp.read(1) + if not c: + break + b0 = ord(c) + if b0 <= 21: + d[b0] = stack + stack = [] + continue + if b0 == 30: + s = "" + loop = True + while loop: + b = ord(fp.read(1)) + for n in (b >> 4, b & 15): + if n == 15: + loop = False + else: + nibble = NIBBLES[n] + assert nibble is not None + s += nibble + value = float(s) + elif 32 <= b0 and b0 <= 246: + value = b0 - 139 + else: + b1 = ord(fp.read(1)) + if 247 <= b0 and b0 <= 250: + value = ((b0 - 247) << 8) + b1 + 108 + elif 251 <= b0 and b0 <= 254: + value = -((b0 - 251) << 8) - b1 - 108 + else: + b2 = ord(fp.read(1)) + if 128 <= b1: + b1 -= 256 + if b0 == 28: + value = b1 << 8 | b2 + else: + value = b1 << 24 | b2 << 16 | struct.unpack(">H", fp.read(2))[0] + stack.append(value) + return d + + +class CFFFont: + + STANDARD_STRINGS = ( + ".notdef", + "space", + "exclam", + "quotedbl", + "numbersign", + "dollar", + "percent", + "ampersand", + "quoteright", + "parenleft", + "parenright", + "asterisk", + "plus", + "comma", + "hyphen", + "period", + "slash", + "zero", + "one", + "two", + "three", + "four", + "five", + "six", + "seven", + "eight", + "nine", + "colon", + "semicolon", + "less", + "equal", + "greater", + "question", + "at", + "A", + "B", + "C", + "D", + "E", + "F", + "G", + "H", + "I", + "J", + "K", + "L", + "M", + "N", + "O", + "P", + "Q", + "R", + "S", + "T", + "U", + "V", + "W", + "X", + "Y", + "Z", + "bracketleft", + "backslash", + "bracketright", + "asciicircum", + "underscore", + "quoteleft", + "a", + "b", + "c", + "d", + "e", + "f", + "g", + "h", + "i", + "j", + "k", + "l", + "m", + "n", + "o", + "p", + "q", + "r", + "s", + "t", + "u", + "v", + "w", + "x", + "y", + "z", + "braceleft", + "bar", + "braceright", + "asciitilde", + "exclamdown", + "cent", + "sterling", + "fraction", + "yen", + "florin", + "section", + "currency", + "quotesingle", + "quotedblleft", + "guillemotleft", + "guilsinglleft", + "guilsinglright", + "fi", + "fl", + "endash", + "dagger", + "daggerdbl", + "periodcentered", + "paragraph", + "bullet", + "quotesinglbase", + "quotedblbase", + "quotedblright", + "guillemotright", + "ellipsis", + "perthousand", + "questiondown", + "grave", + "acute", + "circumflex", + "tilde", + "macron", + "breve", + "dotaccent", + "dieresis", + "ring", + "cedilla", + "hungarumlaut", + "ogonek", + "caron", + "emdash", + "AE", + "ordfeminine", + "Lslash", + "Oslash", + "OE", + "ordmasculine", + "ae", + "dotlessi", + "lslash", + "oslash", + "oe", + "germandbls", + "onesuperior", + "logicalnot", + "mu", + "trademark", + "Eth", + "onehalf", + "plusminus", + "Thorn", + "onequarter", + "divide", + "brokenbar", + "degree", + "thorn", + "threequarters", + "twosuperior", + "registered", + "minus", + "eth", + "multiply", + "threesuperior", + "copyright", + "Aacute", + "Acircumflex", + "Adieresis", + "Agrave", + "Aring", + "Atilde", + "Ccedilla", + "Eacute", + "Ecircumflex", + "Edieresis", + "Egrave", + "Iacute", + "Icircumflex", + "Idieresis", + "Igrave", + "Ntilde", + "Oacute", + "Ocircumflex", + "Odieresis", + "Ograve", + "Otilde", + "Scaron", + "Uacute", + "Ucircumflex", + "Udieresis", + "Ugrave", + "Yacute", + "Ydieresis", + "Zcaron", + "aacute", + "acircumflex", + "adieresis", + "agrave", + "aring", + 
"atilde", + "ccedilla", + "eacute", + "ecircumflex", + "edieresis", + "egrave", + "iacute", + "icircumflex", + "idieresis", + "igrave", + "ntilde", + "oacute", + "ocircumflex", + "odieresis", + "ograve", + "otilde", + "scaron", + "uacute", + "ucircumflex", + "udieresis", + "ugrave", + "yacute", + "ydieresis", + "zcaron", + "exclamsmall", + "Hungarumlautsmall", + "dollaroldstyle", + "dollarsuperior", + "ampersandsmall", + "Acutesmall", + "parenleftsuperior", + "parenrightsuperior", + "twodotenleader", + "onedotenleader", + "zerooldstyle", + "oneoldstyle", + "twooldstyle", + "threeoldstyle", + "fouroldstyle", + "fiveoldstyle", + "sixoldstyle", + "sevenoldstyle", + "eightoldstyle", + "nineoldstyle", + "commasuperior", + "threequartersemdash", + "periodsuperior", + "questionsmall", + "asuperior", + "bsuperior", + "centsuperior", + "dsuperior", + "esuperior", + "isuperior", + "lsuperior", + "msuperior", + "nsuperior", + "osuperior", + "rsuperior", + "ssuperior", + "tsuperior", + "ff", + "ffi", + "ffl", + "parenleftinferior", + "parenrightinferior", + "Circumflexsmall", + "hyphensuperior", + "Gravesmall", + "Asmall", + "Bsmall", + "Csmall", + "Dsmall", + "Esmall", + "Fsmall", + "Gsmall", + "Hsmall", + "Ismall", + "Jsmall", + "Ksmall", + "Lsmall", + "Msmall", + "Nsmall", + "Osmall", + "Psmall", + "Qsmall", + "Rsmall", + "Ssmall", + "Tsmall", + "Usmall", + "Vsmall", + "Wsmall", + "Xsmall", + "Ysmall", + "Zsmall", + "colonmonetary", + "onefitted", + "rupiah", + "Tildesmall", + "exclamdownsmall", + "centoldstyle", + "Lslashsmall", + "Scaronsmall", + "Zcaronsmall", + "Dieresissmall", + "Brevesmall", + "Caronsmall", + "Dotaccentsmall", + "Macronsmall", + "figuredash", + "hypheninferior", + "Ogoneksmall", + "Ringsmall", + "Cedillasmall", + "questiondownsmall", + "oneeighth", + "threeeighths", + "fiveeighths", + "seveneighths", + "onethird", + "twothirds", + "zerosuperior", + "foursuperior", + "fivesuperior", + "sixsuperior", + "sevensuperior", + "eightsuperior", + "ninesuperior", + "zeroinferior", + "oneinferior", + "twoinferior", + "threeinferior", + "fourinferior", + "fiveinferior", + "sixinferior", + "seveninferior", + "eightinferior", + "nineinferior", + "centinferior", + "dollarinferior", + "periodinferior", + "commainferior", + "Agravesmall", + "Aacutesmall", + "Acircumflexsmall", + "Atildesmall", + "Adieresissmall", + "Aringsmall", + "AEsmall", + "Ccedillasmall", + "Egravesmall", + "Eacutesmall", + "Ecircumflexsmall", + "Edieresissmall", + "Igravesmall", + "Iacutesmall", + "Icircumflexsmall", + "Idieresissmall", + "Ethsmall", + "Ntildesmall", + "Ogravesmall", + "Oacutesmall", + "Ocircumflexsmall", + "Otildesmall", + "Odieresissmall", + "OEsmall", + "Oslashsmall", + "Ugravesmall", + "Uacutesmall", + "Ucircumflexsmall", + "Udieresissmall", + "Yacutesmall", + "Thornsmall", + "Ydieresissmall", + "001.000", + "001.001", + "001.002", + "001.003", + "Black", + "Bold", + "Book", + "Light", + "Medium", + "Regular", + "Roman", + "Semibold", + ) + + class INDEX: + def __init__(self, fp: BinaryIO) -> None: + self.fp = fp + self.offsets: List[int] = [] + (count, offsize) = struct.unpack(">HB", self.fp.read(3)) + for i in range(count + 1): + self.offsets.append(nunpack(self.fp.read(offsize))) + self.base = self.fp.tell() - 1 + self.fp.seek(self.base + self.offsets[-1]) + return + + def __repr__(self) -> str: + return "" % len(self) + + def __len__(self) -> int: + return len(self.offsets) - 1 + + def __getitem__(self, i: int) -> bytes: + self.fp.seek(self.base + self.offsets[i]) + return 
self.fp.read(self.offsets[i + 1] - self.offsets[i]) + + def __iter__(self) -> Iterator[bytes]: + return iter(self[i] for i in range(len(self))) + + def __init__(self, name: str, fp: BinaryIO) -> None: + self.name = name + self.fp = fp + # Header + (_major, _minor, hdrsize, offsize) = struct.unpack("BBBB", self.fp.read(4)) + self.fp.read(hdrsize - 4) + # Name INDEX + self.name_index = self.INDEX(self.fp) + # Top DICT INDEX + self.dict_index = self.INDEX(self.fp) + # String INDEX + self.string_index = self.INDEX(self.fp) + # Global Subr INDEX + self.subr_index = self.INDEX(self.fp) + # Top DICT DATA + self.top_dict = getdict(self.dict_index[0]) + (charset_pos,) = self.top_dict.get(15, [0]) + (encoding_pos,) = self.top_dict.get(16, [0]) + (charstring_pos,) = self.top_dict.get(17, [0]) + # CharStrings + self.fp.seek(cast(int, charstring_pos)) + self.charstring = self.INDEX(self.fp) + self.nglyphs = len(self.charstring) + # Encodings + self.code2gid = {} + self.gid2code = {} + self.fp.seek(cast(int, encoding_pos)) + format = self.fp.read(1) + if format == b"\x00": + # Format 0 + (n,) = struct.unpack("B", self.fp.read(1)) + for (code, gid) in enumerate(struct.unpack("B" * n, self.fp.read(n))): + self.code2gid[code] = gid + self.gid2code[gid] = code + elif format == b"\x01": + # Format 1 + (n,) = struct.unpack("B", self.fp.read(1)) + code = 0 + for i in range(n): + (first, nleft) = struct.unpack("BB", self.fp.read(2)) + for gid in range(first, first + nleft + 1): + self.code2gid[code] = gid + self.gid2code[gid] = code + code += 1 + else: + raise PDFValueError("unsupported encoding format: %r" % format) + # Charsets + self.name2gid = {} + self.gid2name = {} + self.fp.seek(cast(int, charset_pos)) + format = self.fp.read(1) + if format == b"\x00": + # Format 0 + n = self.nglyphs - 1 + for (gid, sid) in enumerate( + cast(Tuple[int, ...], struct.unpack(">" + "H" * n, self.fp.read(2 * n))) + ): + gid += 1 + sidname = self.getstr(sid) + self.name2gid[sidname] = gid + self.gid2name[gid] = sidname + elif format == b"\x01": + # Format 1 + (n,) = struct.unpack("B", self.fp.read(1)) + sid = 0 + for i in range(n): + (first, nleft) = struct.unpack("BB", self.fp.read(2)) + for gid in range(first, first + nleft + 1): + sidname = self.getstr(sid) + self.name2gid[sidname] = gid + self.gid2name[gid] = sidname + sid += 1 + elif format == b"\x02": + # Format 2 + assert False, str(("Unhandled", format)) + else: + raise PDFValueError("unsupported charset format: %r" % format) + return + + def getstr(self, sid: int) -> Union[str, bytes]: + # This returns str for one of the STANDARD_STRINGS but bytes otherwise, + # and appears to be a needless source of type complexity. + if sid < len(self.STANDARD_STRINGS): + return self.STANDARD_STRINGS[sid] + return self.string_index[sid - len(self.STANDARD_STRINGS)] + + +class TrueTypeFont: + class CMapNotFound(PDFException): + pass + + def __init__(self, name: str, fp: BinaryIO) -> None: + self.name = name + self.fp = fp + self.tables: Dict[bytes, Tuple[int, int]] = {} + self.fonttype = fp.read(4) + try: + (ntables, _1, _2, _3) = cast( + Tuple[int, int, int, int], struct.unpack(">HHHH", fp.read(8)) + ) + for _ in range(ntables): + (name_bytes, tsum, offset, length) = cast( + Tuple[bytes, int, int, int], struct.unpack(">4sLLL", fp.read(16)) + ) + self.tables[name_bytes] = (offset, length) + except struct.error: + # Do not fail if there are not enough bytes to read. Even for + # corrupted PDFs we would like to get as much information as + # possible, so continue. 
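+            # Whatever table records were read before the truncation remain
+            # in self.tables; create_unicode_map() only needs the cmap entry.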
+ pass + return + + def create_unicode_map(self) -> FileUnicodeMap: + if b"cmap" not in self.tables: + raise TrueTypeFont.CMapNotFound + (base_offset, length) = self.tables[b"cmap"] + fp = self.fp + fp.seek(base_offset) + (version, nsubtables) = cast(Tuple[int, int], struct.unpack(">HH", fp.read(4))) + subtables: List[Tuple[int, int, int]] = [] + for i in range(nsubtables): + subtables.append( + cast(Tuple[int, int, int], struct.unpack(">HHL", fp.read(8))) + ) + char2gid: Dict[int, int] = {} + # Only supports subtable type 0, 2 and 4. + for (platform_id, encoding_id, st_offset) in subtables: + # Skip non-Unicode cmaps. + # https://docs.microsoft.com/en-us/typography/opentype/spec/cmap + if not (platform_id == 0 or (platform_id == 3 and encoding_id in [1, 10])): + continue + fp.seek(base_offset + st_offset) + (fmttype, fmtlen, fmtlang) = cast( + Tuple[int, int, int], struct.unpack(">HHH", fp.read(6)) + ) + if fmttype == 0: + char2gid.update( + enumerate( + cast(Tuple[int, ...], struct.unpack(">256B", fp.read(256))) + ) + ) + elif fmttype == 2: + subheaderkeys = cast( + Tuple[int, ...], struct.unpack(">256H", fp.read(512)) + ) + firstbytes = [0] * 8192 + for (i, k) in enumerate(subheaderkeys): + firstbytes[k // 8] = i + nhdrs = max(subheaderkeys) // 8 + 1 + hdrs: List[Tuple[int, int, int, int, int]] = [] + for i in range(nhdrs): + (firstcode, entcount, delta, offset) = cast( + Tuple[int, int, int, int], struct.unpack(">HHhH", fp.read(8)) + ) + hdrs.append((i, firstcode, entcount, delta, fp.tell() - 2 + offset)) + for (i, firstcode, entcount, delta, pos) in hdrs: + if not entcount: + continue + first = firstcode + (firstbytes[i] << 8) + fp.seek(pos) + for c in range(entcount): + gid = cast(Tuple[int], struct.unpack(">H", fp.read(2)))[0] + if gid: + gid += delta + char2gid[first + c] = gid + elif fmttype == 4: + (segcount, _1, _2, _3) = cast( + Tuple[int, int, int, int], struct.unpack(">HHHH", fp.read(8)) + ) + segcount //= 2 + ecs = cast( + Tuple[int, ...], + struct.unpack(">%dH" % segcount, fp.read(2 * segcount)), + ) + fp.read(2) + scs = cast( + Tuple[int, ...], + struct.unpack(">%dH" % segcount, fp.read(2 * segcount)), + ) + idds = cast( + Tuple[int, ...], + struct.unpack(">%dh" % segcount, fp.read(2 * segcount)), + ) + pos = fp.tell() + idrs = cast( + Tuple[int, ...], + struct.unpack(">%dH" % segcount, fp.read(2 * segcount)), + ) + for (ec, sc, idd, idr) in zip(ecs, scs, idds, idrs): + if idr: + fp.seek(pos + idr) + for c in range(sc, ec + 1): + b = cast(Tuple[int], struct.unpack(">H", fp.read(2)))[0] + char2gid[c] = (b + idd) & 0xFFFF + else: + for c in range(sc, ec + 1): + char2gid[c] = (c + idd) & 0xFFFF + else: + assert False, str(("Unhandled", fmttype)) + if not char2gid: + raise TrueTypeFont.CMapNotFound + # create unicode map + unicode_map = FileUnicodeMap() + for (char, gid) in char2gid.items(): + unicode_map.add_cid2unichr(gid, char) + return unicode_map + + +class PDFFontError(PDFException): + pass + + +class PDFUnicodeNotDefined(PDFFontError): + pass + + +LITERAL_STANDARD_ENCODING = LIT("StandardEncoding") +LITERAL_TYPE1C = LIT("Type1C") + +# Font widths are maintained in a dict type that maps from *either* unicode +# chars or integer character IDs. 
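+# For example {65: 722.0} when keyed by character ID, or {"A": 722.0} when
+# keyed by the character itself (as with the built-in Type1 font metrics).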
+FontWidthDict = Union[Dict[int, float], Dict[str, float]]
+
+
+class PDFFont:
+    def __init__(
+        self,
+        descriptor: Mapping[str, Any],
+        widths: FontWidthDict,
+        default_width: Optional[float] = None,
+    ) -> None:
+        self.descriptor = descriptor
+        self.widths: FontWidthDict = resolve_all(widths)
+        self.fontname = resolve1(descriptor.get("FontName", "unknown"))
+        if isinstance(self.fontname, PSLiteral):
+            self.fontname = literal_name(self.fontname)
+        self.flags = int_value(descriptor.get("Flags", 0))
+        self.ascent = num_value(descriptor.get("Ascent", 0))
+        self.descent = num_value(descriptor.get("Descent", 0))
+        self.italic_angle = num_value(descriptor.get("ItalicAngle", 0))
+        if default_width is None:
+            self.default_width = num_value(descriptor.get("MissingWidth", 0))
+        else:
+            self.default_width = default_width
+        self.default_width = resolve1(self.default_width)
+        self.leading = num_value(descriptor.get("Leading", 0))
+        self.bbox = cast(
+            Rect, list_value(resolve_all(descriptor.get("FontBBox", (0, 0, 0, 0))))
+        )
+        self.hscale = self.vscale = 0.001
+
+        # PDF RM 9.8.1 specifies /Descent should always be a negative number.
+        # PScript5.dll seems to produce Descent with a positive number, but
+        # text analysis will be wrong if this is taken as correct. So force
+        # descent to negative.
+        if self.descent > 0:
+            self.descent = -self.descent
+        return
+
+    def __repr__(self) -> str:
+        return "<PDFFont>"
+
+    def is_vertical(self) -> bool:
+        return False
+
+    def is_multibyte(self) -> bool:
+        return False
+
+    def decode(self, bytes: bytes) -> Iterable[int]:
+        return bytearray(bytes)  # map(ord, bytes)
+
+    def get_ascent(self) -> float:
+        """Ascent above the baseline, in text space units"""
+        return self.ascent * self.vscale
+
+    def get_descent(self) -> float:
+        """Descent below the baseline, in text space units; always negative"""
+        return self.descent * self.vscale
+
+    def get_width(self) -> float:
+        w = self.bbox[2] - self.bbox[0]
+        if w == 0:
+            w = -self.default_width
+        return w * self.hscale
+
+    def get_height(self) -> float:
+        h = self.bbox[3] - self.bbox[1]
+        if h == 0:
+            h = self.ascent - self.descent
+        return h * self.vscale
+
+    def char_width(self, cid: int) -> float:
+        # Because character widths may be keyed by either integer IDs or
+        # strings, we try to look up the character ID first, then its str
+        # equivalent.
+        try:
+            return cast(Dict[int, float], self.widths)[cid] * self.hscale
+        except KeyError:
+            str_widths = cast(Dict[str, float], self.widths)
+            try:
+                return str_widths[self.to_unichr(cid)] * self.hscale
+            except (KeyError, PDFUnicodeNotDefined):
+                return self.default_width * self.hscale
+
+    def char_disp(self, cid: int) -> Union[float, Tuple[Optional[float], float]]:
+        "Returns an integer for horizontal fonts, a tuple for vertical fonts."
+        return 0
+
+    def string_width(self, s: bytes) -> float:
+        return sum(self.char_width(cid) for cid in self.decode(s))
+
+    def to_unichr(self, cid: int) -> str:
+        raise NotImplementedError
+
+
+class PDFSimpleFont(PDFFont):
+    def __init__(
+        self,
+        descriptor: Mapping[str, Any],
+        widths: FontWidthDict,
+        spec: Mapping[str, Any],
+    ) -> None:
+        # Font encoding is specified either by the name of a built-in
+        # encoding or by a dictionary that describes the differences.
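+        # (A dictionary Encoding may carry a /BaseEncoding name plus a
+        # /Differences array of code-to-glyph-name overrides, which is what
+        # EncodingDB.get_encoding resolves below.)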
+ if "Encoding" in spec: + encoding = resolve1(spec["Encoding"]) + else: + encoding = LITERAL_STANDARD_ENCODING + if isinstance(encoding, dict): + name = literal_name(encoding.get("BaseEncoding", LITERAL_STANDARD_ENCODING)) + diff = list_value(encoding.get("Differences", [])) + self.cid2unicode = EncodingDB.get_encoding(name, diff) + else: + self.cid2unicode = EncodingDB.get_encoding(literal_name(encoding)) + self.unicode_map: Optional[UnicodeMap] = None + if "ToUnicode" in spec: + strm = stream_value(spec["ToUnicode"]) + self.unicode_map = FileUnicodeMap() + CMapParser(self.unicode_map, BytesIO(strm.get_data())).run() + PDFFont.__init__(self, descriptor, widths) + return + + def to_unichr(self, cid: int) -> str: + if self.unicode_map: + try: + return self.unicode_map.get_unichr(cid) + except KeyError: + pass + try: + return self.cid2unicode[cid] + except KeyError: + raise PDFUnicodeNotDefined(None, cid) + + +class PDFType1Font(PDFSimpleFont): + def __init__(self, rsrcmgr: "PDFResourceManager", spec: Mapping[str, Any]) -> None: + try: + self.basefont = literal_name(spec["BaseFont"]) + except KeyError: + if settings.STRICT: + raise PDFFontError("BaseFont is missing") + self.basefont = "unknown" + + widths: FontWidthDict + try: + (descriptor, int_widths) = FontMetricsDB.get_metrics(self.basefont) + widths = cast(Dict[str, float], int_widths) # implicit int->float + except KeyError: + descriptor = dict_value(spec.get("FontDescriptor", {})) + firstchar = int_value(spec.get("FirstChar", 0)) + # lastchar = int_value(spec.get('LastChar', 255)) + width_list = list_value(spec.get("Widths", [0] * 256)) + widths = {i + firstchar: resolve1(w) for (i, w) in enumerate(width_list)} + PDFSimpleFont.__init__(self, descriptor, widths, spec) + if "Encoding" not in spec and "FontFile" in descriptor: + # try to recover the missing encoding info from the font file. 
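+            # (Type1 font programs start with a cleartext header of Length1
+            # bytes; Type1FontHeaderParser scans only that portion for the
+            # font's built-in /Encoding.)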
+            self.fontfile = stream_value(descriptor.get("FontFile"))
+            length1 = int_value(self.fontfile["Length1"])
+            data = self.fontfile.get_data()[:length1]
+            parser = Type1FontHeaderParser(BytesIO(data))
+            self.cid2unicode = parser.get_encoding()
+        return
+
+    def __repr__(self) -> str:
+        return "<PDFType1Font: basefont=%r>" % self.basefont
+
+
+class PDFTrueTypeFont(PDFType1Font):
+    def __repr__(self) -> str:
+        return "<PDFTrueTypeFont: basefont=%r>" % self.basefont
+
+
+class PDFType3Font(PDFSimpleFont):
+    def __init__(self, rsrcmgr: "PDFResourceManager", spec: Mapping[str, Any]) -> None:
+        firstchar = int_value(spec.get("FirstChar", 0))
+        # lastchar = int_value(spec.get('LastChar', 0))
+        width_list = list_value(spec.get("Widths", [0] * 256))
+        widths = {i + firstchar: w for (i, w) in enumerate(width_list)}
+        if "FontDescriptor" in spec:
+            descriptor = dict_value(spec["FontDescriptor"])
+        else:
+            descriptor = {"Ascent": 0, "Descent": 0, "FontBBox": spec["FontBBox"]}
+        PDFSimpleFont.__init__(self, descriptor, widths, spec)
+        self.matrix = cast(Matrix, tuple(list_value(spec.get("FontMatrix"))))
+        (_, self.descent, _, self.ascent) = self.bbox
+        (self.hscale, self.vscale) = apply_matrix_norm(self.matrix, (1, 1))
+        return
+
+    def __repr__(self) -> str:
+        return "<PDFType3Font>"
+
+
+class PDFCIDFont(PDFFont):
+    default_disp: Union[float, Tuple[Optional[float], float]]
+
+    def __init__(
+        self,
+        rsrcmgr: "PDFResourceManager",
+        spec: Mapping[str, Any],
+        strict: bool = settings.STRICT,
+    ) -> None:
+        try:
+            self.basefont = literal_name(spec["BaseFont"])
+        except KeyError:
+            if strict:
+                raise PDFFontError("BaseFont is missing")
+            self.basefont = "unknown"
+        self.cidsysteminfo = dict_value(spec.get("CIDSystemInfo", {}))
+        cid_registry = resolve1(self.cidsysteminfo.get("Registry", b"unknown")).decode(
+            "latin1"
+        )
+        cid_ordering = resolve1(self.cidsysteminfo.get("Ordering", b"unknown")).decode(
+            "latin1"
+        )
+        self.cidcoding = f"{cid_registry.strip()}-{cid_ordering.strip()}"
+        self.cmap: CMapBase = self.get_cmap_from_spec(spec, strict)
+
+        try:
+            descriptor = dict_value(spec["FontDescriptor"])
+        except KeyError:
+            if strict:
+                raise PDFFontError("FontDescriptor is missing")
+            descriptor = {}
+        ttf = None
+        if "FontFile2" in descriptor:
+            self.fontfile = stream_value(descriptor.get("FontFile2"))
+            ttf = TrueTypeFont(self.basefont, BytesIO(self.fontfile.get_data()))
+        self.unicode_map: Optional[UnicodeMap] = None
+        if "ToUnicode" in spec:
+            if isinstance(spec["ToUnicode"], PDFStream):
+                strm = stream_value(spec["ToUnicode"])
+                self.unicode_map = FileUnicodeMap()
+                CMapParser(self.unicode_map, BytesIO(strm.get_data())).run()
+            else:
+                cmap_name = literal_name(spec["ToUnicode"])
+                encoding = literal_name(spec["Encoding"])
+                if (
+                    "Identity" in cid_ordering
+                    or "Identity" in cmap_name
+                    or "Identity" in encoding
+                ):
+                    self.unicode_map = IdentityUnicodeMap()
+        elif self.cidcoding in ("Adobe-Identity", "Adobe-UCS"):
+            if ttf:
+                try:
+                    self.unicode_map = ttf.create_unicode_map()
+                except TrueTypeFont.CMapNotFound:
+                    pass
+        else:
+            try:
+                self.unicode_map = CMapDB.get_unicode_map(
+                    self.cidcoding, self.cmap.is_vertical()
+                )
+            except CMapDB.CMapNotFound:
+                pass
+
+        self.vertical = self.cmap.is_vertical()
+        if self.vertical:
+            # writing mode: vertical
+            widths2 = get_widths2(list_value(spec.get("W2", [])))
+            self.disps = {cid: (vx, vy) for (cid, (_, (vx, vy))) in widths2.items()}
+            (vy, w) = resolve1(spec.get("DW2", [880, -1000]))
+            self.default_disp = (None, vy)
+            widths = {cid: w for (cid, (w, _)) in widths2.items()}
+            default_width = w
+        else:
+            # writing mode: 
horizontal
+            self.disps = {}
+            self.default_disp = 0
+            widths = get_widths(list_value(spec.get("W", [])))
+            default_width = spec.get("DW", 1000)
+        PDFFont.__init__(self, descriptor, widths, default_width=default_width)
+        return
+
+    def get_cmap_from_spec(self, spec: Mapping[str, Any], strict: bool) -> CMapBase:
+        """Get cmap from font specification
+
+        For certain PDFs, the encoding type isn't given as an attribute of
+        Encoding but as an attribute of CMapName, where CMapName is an
+        attribute of spec['Encoding'].
+        The horizontal/vertical modes are referred to by different names,
+        such as 'DLIdent-H/V', 'OneByteIdentityH/V', 'Identity-H/V'.
+        """
+        cmap_name = self._get_cmap_name(spec, strict)
+
+        try:
+            return CMapDB.get_cmap(cmap_name)
+        except CMapDB.CMapNotFound as e:
+            if strict:
+                raise PDFFontError(e)
+            return CMap()
+
+    @staticmethod
+    def _get_cmap_name(spec: Mapping[str, Any], strict: bool) -> str:
+        """Get cmap name from font specification"""
+        cmap_name = "unknown"  # default value
+
+        try:
+            spec_encoding = spec["Encoding"]
+            if hasattr(spec_encoding, "name"):
+                cmap_name = literal_name(spec["Encoding"])
+            else:
+                cmap_name = literal_name(spec_encoding["CMapName"])
+        except KeyError:
+            if strict:
+                raise PDFFontError("Encoding is unspecified")
+
+        if type(cmap_name) is PDFStream:  # type: ignore[comparison-overlap]
+            cmap_name_stream: PDFStream = cast(PDFStream, cmap_name)
+            if "CMapName" in cmap_name_stream:
+                cmap_name = cmap_name_stream.get("CMapName").name
+            else:
+                if strict:
+                    raise PDFFontError("CMapName unspecified for encoding")
+
+        return IDENTITY_ENCODER.get(cmap_name, cmap_name)
+
+    def __repr__(self) -> str:
+        return "<PDFCIDFont: basefont={!r}, cidcoding={!r}>".format(
+            self.basefont, self.cidcoding
+        )
+
+    def is_vertical(self) -> bool:
+        return self.vertical
+
+    def is_multibyte(self) -> bool:
+        return True
+
+    def decode(self, bytes: bytes) -> Iterable[int]:
+        return self.cmap.decode(bytes)
+
+    def char_disp(self, cid: int) -> Union[float, Tuple[Optional[float], float]]:
+        "Returns an integer for horizontal fonts, a tuple for vertical fonts."
+        return self.disps.get(cid, self.default_disp)
+
+    def to_unichr(self, cid: int) -> str:
+        try:
+            if not self.unicode_map:
+                raise PDFKeyError(cid)
+            return self.unicode_map.get_unichr(cid)
+        except KeyError:
+            raise PDFUnicodeNotDefined(self.cidcoding, cid)
+
+
+def main(argv: List[str]) -> None:
+    from warnings import warn
+
+    warn(
+        "The function main() from pdffont.py will be removed in 2023. It was probably "
+        "introduced for testing purposes a long time ago, and is no longer relevant. "
+        "Feel free to create a GitHub issue if you disagree.",
+        DeprecationWarning,
+    )
+
+    for fname in argv[1:]:
+        fp = open(fname, "rb")
+        font = CFFFont(fname, fp)
+        print(font)
+        fp.close()
+    return
+
+
+if __name__ == "__main__":
+    main(sys.argv)
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfinterp.py b/templates/skills/file_manager/dependencies/pdfminer/pdfinterp.py
new file mode 100644
index 00000000..3ff2c144
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdfinterp.py
@@ -0,0 +1,1052 @@
+import logging
+import re
+from io import BytesIO
+from typing import Dict, List, Mapping, Optional, Sequence, Tuple, Union, cast
+
+from . 
import settings
+from .cmapdb import CMap
+from .cmapdb import CMapBase
+from .cmapdb import CMapDB
+from .pdfcolor import PDFColorSpace
+from .pdfcolor import PREDEFINED_COLORSPACE
+from .pdfdevice import PDFDevice
+from .pdfdevice import PDFTextSeq
+from .pdffont import PDFCIDFont
+from .pdffont import PDFFont
+from .pdffont import PDFFontError
+from .pdffont import PDFTrueTypeFont
+from .pdffont import PDFType1Font
+from .pdffont import PDFType3Font
+from .pdfpage import PDFPage
+from .pdftypes import PDFObjRef
+from .pdfexceptions import PDFException
+from .pdftypes import PDFStream
+from .pdftypes import dict_value
+from .pdftypes import list_value
+from .pdftypes import resolve1
+from .pdftypes import stream_value
+from .psparser import KWD
+from .psexceptions import PSEOF, PSTypeError
+from .psparser import LIT
+from .psparser import PSKeyword
+from .psparser import PSLiteral
+from .psparser import PSStackParser
+from .psparser import PSStackType
+from .psparser import keyword_name
+from .psparser import literal_name
+from .utils import MATRIX_IDENTITY
+from .utils import Matrix, Point, PathSegment, Rect
+from .utils import choplist
+from .utils import mult_matrix
+
+log = logging.getLogger(__name__)
+
+
+class PDFResourceError(PDFException):
+    pass
+
+
+class PDFInterpreterError(PDFException):
+    pass
+
+
+LITERAL_PDF = LIT("PDF")
+LITERAL_TEXT = LIT("Text")
+LITERAL_FONT = LIT("Font")
+LITERAL_FORM = LIT("Form")
+LITERAL_IMAGE = LIT("Image")
+
+
+class PDFTextState:
+    matrix: Matrix
+    linematrix: Point
+
+    def __init__(self) -> None:
+        self.font: Optional[PDFFont] = None
+        self.fontsize: float = 0
+        self.charspace: float = 0
+        self.wordspace: float = 0
+        self.scaling: float = 100
+        self.leading: float = 0
+        self.render: int = 0
+        self.rise: float = 0
+        self.reset()
+        # self.matrix is set
+        # self.linematrix is set
+
+    def __repr__(self) -> str:
+        return (
+            "<PDFTextState: font=%r, fontsize=%r, charspace=%r, wordspace=%r, "
+            "scaling=%r, leading=%r, render=%r, rise=%r, matrix=%r, linematrix=%r>"
+            % (
+                self.font,
+                self.fontsize,
+                self.charspace,
+                self.wordspace,
+                self.scaling,
+                self.leading,
+                self.render,
+                self.rise,
+                self.matrix,
+                self.linematrix,
+            )
+        )
+
+    def copy(self) -> "PDFTextState":
+        obj = PDFTextState()
+        obj.font = self.font
+        obj.fontsize = self.fontsize
+        obj.charspace = self.charspace
+        obj.wordspace = self.wordspace
+        obj.scaling = self.scaling
+        obj.leading = self.leading
+        obj.render = self.render
+        obj.rise = self.rise
+        obj.matrix = self.matrix
+        obj.linematrix = self.linematrix
+        return obj
+
+    def reset(self) -> None:
+        self.matrix = MATRIX_IDENTITY
+        self.linematrix = (0, 0)
+
+
+Color = Union[
+    float,  # Greyscale
+    Tuple[float, float, float],  # R, G, B
+    Tuple[float, float, float, float],  # C, M, Y, K
+]
+
+
+class PDFGraphicState:
+    def __init__(self) -> None:
+        self.linewidth: float = 0
+        self.linecap: Optional[object] = None
+        self.linejoin: Optional[object] = None
+        self.miterlimit: Optional[object] = None
+        self.dash: Optional[Tuple[object, object]] = None
+        self.intent: Optional[object] = None
+        self.flatness: Optional[object] = None
+
+        # stroking color
+        self.scolor: Optional[Color] = None
+
+        # non stroking color
+        self.ncolor: Optional[Color] = None
+
+    def copy(self) -> "PDFGraphicState":
+        obj = PDFGraphicState()
+        obj.linewidth = self.linewidth
+        obj.linecap = self.linecap
+        obj.linejoin = self.linejoin
+        obj.miterlimit = self.miterlimit
+        obj.dash = self.dash
+        obj.intent = self.intent
+        obj.flatness = self.flatness
+        obj.scolor = self.scolor
+        obj.ncolor = self.ncolor
+        return obj
+
+    def __repr__(self) -> str:
+        return (
+            "<PDFGraphicState: linewidth=%r, linecap=%r, linejoin=%r, "
+            "miterlimit=%r, dash=%r, intent=%r, flatness=%r, "
+            "stroking color=%r, non stroking color=%r>"
+            % (
+                self.linewidth,
+                self.linecap, 
+ self.linejoin, + self.miterlimit, + self.dash, + self.intent, + self.flatness, + self.scolor, + self.ncolor, + ) + ) + + +class PDFResourceManager: + """Repository of shared resources. + + ResourceManager facilitates reuse of shared resources + such as fonts and images so that large objects are not + allocated multiple times. + """ + + def __init__(self, caching: bool = True) -> None: + self.caching = caching + self._cached_fonts: Dict[object, PDFFont] = {} + + def get_procset(self, procs: Sequence[object]) -> None: + for proc in procs: + if proc is LITERAL_PDF: + pass + elif proc is LITERAL_TEXT: + pass + else: + pass + + def get_cmap(self, cmapname: str, strict: bool = False) -> CMapBase: + try: + return CMapDB.get_cmap(cmapname) + except CMapDB.CMapNotFound: + if strict: + raise + return CMap() + + def get_font(self, objid: object, spec: Mapping[str, object]) -> PDFFont: + if objid and objid in self._cached_fonts: + font = self._cached_fonts[objid] + else: + log.debug("get_font: create: objid=%r, spec=%r", objid, spec) + if settings.STRICT: + if spec["Type"] is not LITERAL_FONT: + raise PDFFontError("Type is not /Font") + # Create a Font object. + if "Subtype" in spec: + subtype = literal_name(spec["Subtype"]) + else: + if settings.STRICT: + raise PDFFontError("Font Subtype is not specified.") + subtype = "Type1" + if subtype in ("Type1", "MMType1"): + # Type1 Font + font = PDFType1Font(self, spec) + elif subtype == "TrueType": + # TrueType Font + font = PDFTrueTypeFont(self, spec) + elif subtype == "Type3": + # Type3 Font + font = PDFType3Font(self, spec) + elif subtype in ("CIDFontType0", "CIDFontType2"): + # CID Font + font = PDFCIDFont(self, spec) + elif subtype == "Type0": + # Type0 Font + dfonts = list_value(spec["DescendantFonts"]) + assert dfonts + subspec = dict_value(dfonts[0]).copy() + for k in ("Encoding", "ToUnicode"): + if k in spec: + subspec[k] = resolve1(spec[k]) + font = self.get_font(None, subspec) + else: + if settings.STRICT: + raise PDFFontError("Invalid Font spec: %r" % spec) + font = PDFType1Font(self, spec) # this is so wrong! + if objid and self.caching: + self._cached_fonts[objid] = font + return font + + +class PDFContentParser(PSStackParser[Union[PSKeyword, PDFStream]]): + def __init__(self, streams: Sequence[object]) -> None: + self.streams = streams + self.istream = 0 + # PSStackParser.__init__(fp=None) is safe only because we've overloaded + # all the methods that would attempt to access self.fp without first + # calling self.fillfp(). 
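+        # (fillfp() lazily opens the next stream in self.streams as each one
+        # is exhausted, so page content split across several stream objects
+        # is parsed as one continuous stream.)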
+ PSStackParser.__init__(self, None) # type: ignore[arg-type] + + def fillfp(self) -> None: + if not self.fp: + if self.istream < len(self.streams): + strm = stream_value(self.streams[self.istream]) + self.istream += 1 + else: + raise PSEOF("Unexpected EOF, file truncated?") + self.fp = BytesIO(strm.get_data()) + + def seek(self, pos: int) -> None: + self.fillfp() + PSStackParser.seek(self, pos) + + def fillbuf(self) -> None: + if self.charpos < len(self.buf): + return + while 1: + self.fillfp() + self.bufpos = self.fp.tell() + self.buf = self.fp.read(self.BUFSIZ) + if self.buf: + break + self.fp = None # type: ignore[assignment] + self.charpos = 0 + + def get_inline_data(self, pos: int, target: bytes = b"EI") -> Tuple[int, bytes]: + self.seek(pos) + i = 0 + data = b"" + while i <= len(target): + self.fillbuf() + if i: + ci = self.buf[self.charpos] + c = bytes((ci,)) + data += c + self.charpos += 1 + if len(target) <= i and c.isspace(): + i += 1 + elif i < len(target) and c == (bytes((target[i],))): + i += 1 + else: + i = 0 + else: + try: + j = self.buf.index(target[0], self.charpos) + data += self.buf[self.charpos : j + 1] + self.charpos = j + 1 + i = 1 + except ValueError: + data += self.buf[self.charpos :] + self.charpos = len(self.buf) + data = data[: -(len(target) + 1)] # strip the last part + data = re.sub(rb"(\x0d\x0a|[\x0d\x0a])$", b"", data) + return (pos, data) + + def flush(self) -> None: + self.add_results(*self.popall()) + + KEYWORD_BI = KWD(b"BI") + KEYWORD_ID = KWD(b"ID") + KEYWORD_EI = KWD(b"EI") + + def do_keyword(self, pos: int, token: PSKeyword) -> None: + if token is self.KEYWORD_BI: + # inline image within a content stream + self.start_type(pos, "inline") + elif token is self.KEYWORD_ID: + try: + (_, objs) = self.end_type("inline") + if len(objs) % 2 != 0: + error_msg = f"Invalid dictionary construct: {objs!r}" + raise PSTypeError(error_msg) + d = {literal_name(k): v for (k, v) in choplist(2, objs)} + (pos, data) = self.get_inline_data(pos + len(b"ID ")) + obj = PDFStream(d, data) + self.push((pos, obj)) + self.push((pos, self.KEYWORD_EI)) + except PSTypeError: + if settings.STRICT: + raise + else: + self.push((pos, token)) + + +PDFStackT = PSStackType[PDFStream] +"""Types that may appear on the PDF argument stack.""" + + +class PDFPageInterpreter: + """Processor for the content of a PDF page + + Reference: PDF Reference, Appendix A, Operator Summary + """ + + def __init__(self, rsrcmgr: PDFResourceManager, device: PDFDevice) -> None: + self.rsrcmgr = rsrcmgr + self.device = device + return + + def dup(self) -> "PDFPageInterpreter": + return self.__class__(self.rsrcmgr, self.device) + + def init_resources(self, resources: Dict[object, object]) -> None: + """Prepare the fonts and XObjects listed in the Resource attribute.""" + self.resources = resources + self.fontmap: Dict[object, PDFFont] = {} + self.xobjmap = {} + self.csmap: Dict[str, PDFColorSpace] = PREDEFINED_COLORSPACE.copy() + if not resources: + return + + def get_colorspace(spec: object) -> Optional[PDFColorSpace]: + if isinstance(spec, list): + name = literal_name(spec[0]) + else: + name = literal_name(spec) + if name == "ICCBased" and isinstance(spec, list) and 2 <= len(spec): + return PDFColorSpace(name, stream_value(spec[1])["N"]) + elif name == "DeviceN" and isinstance(spec, list) and 2 <= len(spec): + return PDFColorSpace(name, len(list_value(spec[1]))) + else: + return PREDEFINED_COLORSPACE.get(name) + + for (k, v) in dict_value(resources).items(): + log.debug("Resource: %r: %r", k, v) + if k == 
"Font": + for (fontid, spec) in dict_value(v).items(): + objid = None + if isinstance(spec, PDFObjRef): + objid = spec.objid + spec = dict_value(spec) + self.fontmap[fontid] = self.rsrcmgr.get_font(objid, spec) + elif k == "ColorSpace": + for (csid, spec) in dict_value(v).items(): + colorspace = get_colorspace(resolve1(spec)) + if colorspace is not None: + self.csmap[csid] = colorspace + elif k == "ProcSet": + self.rsrcmgr.get_procset(list_value(v)) + elif k == "XObject": + for (xobjid, xobjstrm) in dict_value(v).items(): + self.xobjmap[xobjid] = xobjstrm + return + + def init_state(self, ctm: Matrix) -> None: + """Initialize the text and graphic states for rendering a page.""" + # gstack: stack for graphical states. + self.gstack: List[Tuple[Matrix, PDFTextState, PDFGraphicState]] = [] + self.ctm = ctm + self.device.set_ctm(self.ctm) + self.textstate = PDFTextState() + self.graphicstate = PDFGraphicState() + self.curpath: List[PathSegment] = [] + # argstack: stack for command arguments. + self.argstack: List[PDFStackT] = [] + # set some global states. + self.scs: Optional[PDFColorSpace] = None + self.ncs: Optional[PDFColorSpace] = None + if self.csmap: + self.scs = self.ncs = next(iter(self.csmap.values())) + return + + def push(self, obj: PDFStackT) -> None: + self.argstack.append(obj) + return + + def pop(self, n: int) -> List[PDFStackT]: + if n == 0: + return [] + x = self.argstack[-n:] + self.argstack = self.argstack[:-n] + return x + + def get_current_state(self) -> Tuple[Matrix, PDFTextState, PDFGraphicState]: + return (self.ctm, self.textstate.copy(), self.graphicstate.copy()) + + def set_current_state( + self, state: Tuple[Matrix, PDFTextState, PDFGraphicState] + ) -> None: + (self.ctm, self.textstate, self.graphicstate) = state + self.device.set_ctm(self.ctm) + return + + def do_q(self) -> None: + """Save graphics state""" + self.gstack.append(self.get_current_state()) + return + + def do_Q(self) -> None: + """Restore graphics state""" + if self.gstack: + self.set_current_state(self.gstack.pop()) + return + + def do_cm( + self, + a1: PDFStackT, + b1: PDFStackT, + c1: PDFStackT, + d1: PDFStackT, + e1: PDFStackT, + f1: PDFStackT, + ) -> None: + """Concatenate matrix to current transformation matrix""" + self.ctm = mult_matrix(cast(Matrix, (a1, b1, c1, d1, e1, f1)), self.ctm) + self.device.set_ctm(self.ctm) + return + + def do_w(self, linewidth: PDFStackT) -> None: + """Set line width""" + self.graphicstate.linewidth = cast(float, linewidth) + return + + def do_J(self, linecap: PDFStackT) -> None: + """Set line cap style""" + self.graphicstate.linecap = linecap + return + + def do_j(self, linejoin: PDFStackT) -> None: + """Set line join style""" + self.graphicstate.linejoin = linejoin + return + + def do_M(self, miterlimit: PDFStackT) -> None: + """Set miter limit""" + self.graphicstate.miterlimit = miterlimit + return + + def do_d(self, dash: PDFStackT, phase: PDFStackT) -> None: + """Set line dash pattern""" + self.graphicstate.dash = (dash, phase) + return + + def do_ri(self, intent: PDFStackT) -> None: + """Set color rendering intent""" + self.graphicstate.intent = intent + return + + def do_i(self, flatness: PDFStackT) -> None: + """Set flatness tolerance""" + self.graphicstate.flatness = flatness + return + + def do_gs(self, name: PDFStackT) -> None: + """Set parameters from graphics state parameter dictionary""" + # todo + return + + def do_m(self, x: PDFStackT, y: PDFStackT) -> None: + """Begin new subpath""" + self.curpath.append(("m", cast(float, x), cast(float, y))) + 
return + + def do_l(self, x: PDFStackT, y: PDFStackT) -> None: + """Append straight line segment to path""" + self.curpath.append(("l", cast(float, x), cast(float, y))) + return + + def do_c( + self, + x1: PDFStackT, + y1: PDFStackT, + x2: PDFStackT, + y2: PDFStackT, + x3: PDFStackT, + y3: PDFStackT, + ) -> None: + """Append curved segment to path (three control points)""" + self.curpath.append( + ( + "c", + cast(float, x1), + cast(float, y1), + cast(float, x2), + cast(float, y2), + cast(float, x3), + cast(float, y3), + ) + ) + return + + def do_v(self, x2: PDFStackT, y2: PDFStackT, x3: PDFStackT, y3: PDFStackT) -> None: + """Append curved segment to path (initial point replicated)""" + self.curpath.append( + ("v", cast(float, x2), cast(float, y2), cast(float, x3), cast(float, y3)) + ) + return + + def do_y(self, x1: PDFStackT, y1: PDFStackT, x3: PDFStackT, y3: PDFStackT) -> None: + """Append curved segment to path (final point replicated)""" + self.curpath.append( + ("y", cast(float, x1), cast(float, y1), cast(float, x3), cast(float, y3)) + ) + return + + def do_h(self) -> None: + """Close subpath""" + self.curpath.append(("h",)) + return + + def do_re(self, x: PDFStackT, y: PDFStackT, w: PDFStackT, h: PDFStackT) -> None: + """Append rectangle to path""" + x = cast(float, x) + y = cast(float, y) + w = cast(float, w) + h = cast(float, h) + self.curpath.append(("m", x, y)) + self.curpath.append(("l", x + w, y)) + self.curpath.append(("l", x + w, y + h)) + self.curpath.append(("l", x, y + h)) + self.curpath.append(("h",)) + return + + def do_S(self) -> None: + """Stroke path""" + self.device.paint_path(self.graphicstate, True, False, False, self.curpath) + self.curpath = [] + return + + def do_s(self) -> None: + """Close and stroke path""" + self.do_h() + self.do_S() + return + + def do_f(self) -> None: + """Fill path using nonzero winding number rule""" + self.device.paint_path(self.graphicstate, False, True, False, self.curpath) + self.curpath = [] + return + + def do_F(self) -> None: + """Fill path using nonzero winding number rule (obsolete)""" + return self.do_f() + + def do_f_a(self) -> None: + """Fill path using even-odd rule""" + self.device.paint_path(self.graphicstate, False, True, True, self.curpath) + self.curpath = [] + return + + def do_B(self) -> None: + """Fill and stroke path using nonzero winding number rule""" + self.device.paint_path(self.graphicstate, True, True, False, self.curpath) + self.curpath = [] + return + + def do_B_a(self) -> None: + """Fill and stroke path using even-odd rule""" + self.device.paint_path(self.graphicstate, True, True, True, self.curpath) + self.curpath = [] + return + + def do_b(self) -> None: + """Close, fill, and stroke path using nonzero winding number rule""" + self.do_h() + self.do_B() + return + + def do_b_a(self) -> None: + """Close, fill, and stroke path using even-odd rule""" + self.do_h() + self.do_B_a() + return + + def do_n(self) -> None: + """End path without filling or stroking""" + self.curpath = [] + return + + def do_W(self) -> None: + """Set clipping path using nonzero winding number rule""" + return + + def do_W_a(self) -> None: + """Set clipping path using even-odd rule""" + return + + def do_CS(self, name: PDFStackT) -> None: + """Set color space for stroking operations + + Introduced in PDF 1.1 + """ + try: + self.scs = self.csmap[literal_name(name)] + except KeyError: + if settings.STRICT: + raise PDFInterpreterError("Undefined ColorSpace: %r" % name) + return + + def do_cs(self, name: PDFStackT) -> None: + """Set color 
space for nonstroking operations""" + try: + self.ncs = self.csmap[literal_name(name)] + except KeyError: + if settings.STRICT: + raise PDFInterpreterError("Undefined ColorSpace: %r" % name) + return + + def do_G(self, gray: PDFStackT) -> None: + """Set gray level for stroking operations""" + self.graphicstate.scolor = cast(float, gray) + self.scs = self.csmap["DeviceGray"] + return + + def do_g(self, gray: PDFStackT) -> None: + """Set gray level for nonstroking operations""" + self.graphicstate.ncolor = cast(float, gray) + self.ncs = self.csmap["DeviceGray"] + return + + def do_RG(self, r: PDFStackT, g: PDFStackT, b: PDFStackT) -> None: + """Set RGB color for stroking operations""" + self.graphicstate.scolor = (cast(float, r), cast(float, g), cast(float, b)) + self.scs = self.csmap["DeviceRGB"] + return + + def do_rg(self, r: PDFStackT, g: PDFStackT, b: PDFStackT) -> None: + """Set RGB color for nonstroking operations""" + self.graphicstate.ncolor = (cast(float, r), cast(float, g), cast(float, b)) + self.ncs = self.csmap["DeviceRGB"] + return + + def do_K(self, c: PDFStackT, m: PDFStackT, y: PDFStackT, k: PDFStackT) -> None: + """Set CMYK color for stroking operations""" + self.graphicstate.scolor = ( + cast(float, c), + cast(float, m), + cast(float, y), + cast(float, k), + ) + self.scs = self.csmap["DeviceCMYK"] + return + + def do_k(self, c: PDFStackT, m: PDFStackT, y: PDFStackT, k: PDFStackT) -> None: + """Set CMYK color for nonstroking operations""" + self.graphicstate.ncolor = ( + cast(float, c), + cast(float, m), + cast(float, y), + cast(float, k), + ) + self.ncs = self.csmap["DeviceCMYK"] + return + + def do_SCN(self) -> None: + """Set color for stroking operations.""" + if self.scs: + n = self.scs.ncomponents + else: + if settings.STRICT: + raise PDFInterpreterError("No colorspace specified!") + n = 1 + self.graphicstate.scolor = cast(Color, self.pop(n)) + return + + def do_scn(self) -> None: + """Set color for nonstroking operations""" + if self.ncs: + n = self.ncs.ncomponents + else: + if settings.STRICT: + raise PDFInterpreterError("No colorspace specified!") + n = 1 + self.graphicstate.ncolor = cast(Color, self.pop(n)) + return + + def do_SC(self) -> None: + """Set color for stroking operations""" + self.do_SCN() + return + + def do_sc(self) -> None: + """Set color for nonstroking operations""" + self.do_scn() + return + + def do_sh(self, name: object) -> None: + """Paint area defined by shading pattern""" + return + + def do_BT(self) -> None: + """Begin text object + + Initializing the text matrix, Tm, and the text line matrix, Tlm, to + the identity matrix. Text objects cannot be nested; a second BT cannot + appear before an ET. 
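+
+        (All text-showing and text-positioning operators must appear
+        between a BT/ET pair; see the PDF Reference's section on text
+        objects.)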
+ """ + self.textstate.reset() + return + + def do_ET(self) -> None: + """End a text object""" + return + + def do_BX(self) -> None: + """Begin compatibility section""" + return + + def do_EX(self) -> None: + """End compatibility section""" + return + + def do_MP(self, tag: PDFStackT) -> None: + """Define marked-content point""" + self.device.do_tag(cast(PSLiteral, tag)) + return + + def do_DP(self, tag: PDFStackT, props: PDFStackT) -> None: + """Define marked-content point with property list""" + self.device.do_tag(cast(PSLiteral, tag), props) + return + + def do_BMC(self, tag: PDFStackT) -> None: + """Begin marked-content sequence""" + self.device.begin_tag(cast(PSLiteral, tag)) + return + + def do_BDC(self, tag: PDFStackT, props: PDFStackT) -> None: + """Begin marked-content sequence with property list""" + self.device.begin_tag(cast(PSLiteral, tag), props) + return + + def do_EMC(self) -> None: + """End marked-content sequence""" + self.device.end_tag() + return + + def do_Tc(self, space: PDFStackT) -> None: + """Set character spacing. + + Character spacing is used by the Tj, TJ, and ' operators. + + :param space: a number expressed in unscaled text space units. + """ + self.textstate.charspace = cast(float, space) + return + + def do_Tw(self, space: PDFStackT) -> None: + """Set the word spacing. + + Word spacing is used by the Tj, TJ, and ' operators. + + :param space: a number expressed in unscaled text space units + """ + self.textstate.wordspace = cast(float, space) + return + + def do_Tz(self, scale: PDFStackT) -> None: + """Set the horizontal scaling. + + :param scale: is a number specifying the percentage of the normal width + """ + self.textstate.scaling = cast(float, scale) + return + + def do_TL(self, leading: PDFStackT) -> None: + """Set the text leading. + + Text leading is used only by the T*, ', and " operators. + + :param leading: a number expressed in unscaled text space units + """ + self.textstate.leading = -cast(float, leading) + return + + def do_Tf(self, fontid: PDFStackT, fontsize: PDFStackT) -> None: + """Set the text font + + :param fontid: the name of a font resource in the Font subdictionary + of the current resource dictionary + :param fontsize: size is a number representing a scale factor. 
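+
+        (The rendered glyph size additionally depends on the text matrix,
+        the horizontal scaling set by Tz, and the current transformation
+        matrix.)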
+ """ + try: + self.textstate.font = self.fontmap[literal_name(fontid)] + except KeyError: + if settings.STRICT: + raise PDFInterpreterError("Undefined Font id: %r" % fontid) + self.textstate.font = self.rsrcmgr.get_font(None, {}) + self.textstate.fontsize = cast(float, fontsize) + return + + def do_Tr(self, render: PDFStackT) -> None: + """Set the text rendering mode""" + self.textstate.render = cast(int, render) + return + + def do_Ts(self, rise: PDFStackT) -> None: + """Set the text rise + + :param rise: a number expressed in unscaled text space units + """ + self.textstate.rise = cast(float, rise) + return + + def do_Td(self, tx: PDFStackT, ty: PDFStackT) -> None: + """Move text position""" + tx = cast(float, tx) + ty = cast(float, ty) + (a, b, c, d, e, f) = self.textstate.matrix + self.textstate.matrix = (a, b, c, d, tx * a + ty * c + e, tx * b + ty * d + f) + self.textstate.linematrix = (0, 0) + return + + def do_TD(self, tx: PDFStackT, ty: PDFStackT) -> None: + """Move text position and set leading""" + tx = cast(float, tx) + ty = cast(float, ty) + (a, b, c, d, e, f) = self.textstate.matrix + self.textstate.matrix = (a, b, c, d, tx * a + ty * c + e, tx * b + ty * d + f) + self.textstate.leading = ty + self.textstate.linematrix = (0, 0) + return + + def do_Tm( + self, + a: PDFStackT, + b: PDFStackT, + c: PDFStackT, + d: PDFStackT, + e: PDFStackT, + f: PDFStackT, + ) -> None: + """Set text matrix and text line matrix""" + self.textstate.matrix = cast(Matrix, (a, b, c, d, e, f)) + self.textstate.linematrix = (0, 0) + return + + def do_T_a(self) -> None: + """Move to start of next text line""" + (a, b, c, d, e, f) = self.textstate.matrix + self.textstate.matrix = ( + a, + b, + c, + d, + self.textstate.leading * c + e, + self.textstate.leading * d + f, + ) + self.textstate.linematrix = (0, 0) + return + + def do_TJ(self, seq: PDFStackT) -> None: + """Show text, allowing individual glyph positioning""" + if self.textstate.font is None: + if settings.STRICT: + raise PDFInterpreterError("No font specified!") + return + assert self.ncs is not None + self.device.render_string( + self.textstate, cast(PDFTextSeq, seq), self.ncs, self.graphicstate.copy() + ) + return + + def do_Tj(self, s: PDFStackT) -> None: + """Show text""" + self.do_TJ([s]) + return + + def do__q(self, s: PDFStackT) -> None: + """Move to next line and show text + + The ' (single quote) operator. + """ + self.do_T_a() + self.do_TJ([s]) + return + + def do__w(self, aw: PDFStackT, ac: PDFStackT, s: PDFStackT) -> None: + """Set word and character spacing, move to next line, and show text + + The " (double quote) operator. 
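+
+        (Per the PDF spec this is shorthand for "aw Tw ac Tc string '";
+        note that the implementation below applies the spacings and shows
+        the string, but does not perform the line advance that ' would.)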
+ """ + self.do_Tw(aw) + self.do_Tc(ac) + self.do_TJ([s]) + return + + def do_BI(self) -> None: + """Begin inline image object""" + return + + def do_ID(self) -> None: + """Begin inline image data""" + return + + def do_EI(self, obj: PDFStackT) -> None: + """End inline image object""" + if isinstance(obj, PDFStream) and "W" in obj and "H" in obj: + iobjid = str(id(obj)) + self.device.begin_figure(iobjid, (0, 0, 1, 1), MATRIX_IDENTITY) + self.device.render_image(iobjid, obj) + self.device.end_figure(iobjid) + return + + def do_Do(self, xobjid_arg: PDFStackT) -> None: + """Invoke named XObject""" + xobjid = cast(str, literal_name(xobjid_arg)) + try: + xobj = stream_value(self.xobjmap[xobjid]) + except KeyError: + if settings.STRICT: + raise PDFInterpreterError("Undefined xobject id: %r" % xobjid) + return + log.debug("Processing xobj: %r", xobj) + subtype = xobj.get("Subtype") + if subtype is LITERAL_FORM and "BBox" in xobj: + interpreter = self.dup() + bbox = cast(Rect, list_value(xobj["BBox"])) + matrix = cast(Matrix, list_value(xobj.get("Matrix", MATRIX_IDENTITY))) + # According to PDF reference 1.7 section 4.9.1, XObjects in + # earlier PDFs (prior to v1.2) use the page's Resources entry + # instead of having their own Resources entry. + xobjres = xobj.get("Resources") + if xobjres: + resources = dict_value(xobjres) + else: + resources = self.resources.copy() + self.device.begin_figure(xobjid, bbox, matrix) + interpreter.render_contents( + resources, [xobj], ctm=mult_matrix(matrix, self.ctm) + ) + self.device.end_figure(xobjid) + elif subtype is LITERAL_IMAGE and "Width" in xobj and "Height" in xobj: + self.device.begin_figure(xobjid, (0, 0, 1, 1), MATRIX_IDENTITY) + self.device.render_image(xobjid, xobj) + self.device.end_figure(xobjid) + else: + # unsupported xobject type. + pass + return + + def process_page(self, page: PDFPage) -> None: + log.debug("Processing page: %r", page) + (x0, y0, x1, y1) = page.mediabox + if page.rotate == 90: + ctm = (0, -1, 1, 0, -y0, x1) + elif page.rotate == 180: + ctm = (-1, 0, 0, -1, x1, y1) + elif page.rotate == 270: + ctm = (0, 1, -1, 0, y1, -x0) + else: + ctm = (1, 0, 0, 1, -x0, -y0) + self.device.begin_page(page, ctm) + self.render_contents(page.resources, page.contents, ctm=ctm) + self.device.end_page(page) + return + + def render_contents( + self, + resources: Dict[object, object], + streams: Sequence[object], + ctm: Matrix = MATRIX_IDENTITY, + ) -> None: + """Render the content streams. + + This method may be called recursively. 
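+
+        (Recursion happens via do_Do: rendering a Form XObject duplicates
+        this interpreter and renders the XObject's own content stream.)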
+ """ + log.debug( + "render_contents: resources=%r, streams=%r, ctm=%r", resources, streams, ctm + ) + self.init_resources(resources) + self.init_state(ctm) + self.execute(list_value(streams)) + return + + def execute(self, streams: Sequence[object]) -> None: + try: + parser = PDFContentParser(streams) + except PSEOF: + # empty page + return + while 1: + try: + (_, obj) = parser.nextobject() + except PSEOF: + break + if isinstance(obj, PSKeyword): + name = keyword_name(obj) + method = "do_%s" % name.replace("*", "_a").replace('"', "_w").replace( + "'", "_q" + ) + if hasattr(self, method): + func = getattr(self, method) + nargs = func.__code__.co_argcount - 1 + if nargs: + args = self.pop(nargs) + log.debug("exec: %s %r", name, args) + if len(args) == nargs: + func(*args) + else: + log.debug("exec: %s", name) + func() + else: + if settings.STRICT: + error_msg = "Unknown operator: %r" % name + raise PDFInterpreterError(error_msg) + else: + self.push(obj) + return diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfpage.py b/templates/skills/file_manager/dependencies/pdfminer/pdfpage.py new file mode 100644 index 00000000..7df4cf6b --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/pdfpage.py @@ -0,0 +1,177 @@ +import itertools +import logging +from typing import BinaryIO, Container, Dict, Iterator, List, Optional, Tuple, Any + +from pdfminer.utils import Rect +from . import settings +from .pdfdocument import PDFDocument, PDFTextExtractionNotAllowed, PDFNoPageLabels +from .pdfparser import PDFParser +from .pdftypes import dict_value +from .pdfexceptions import PDFObjectNotFound +from .pdftypes import int_value +from .pdftypes import list_value +from .pdftypes import resolve1 +from .psparser import LIT + +log = logging.getLogger(__name__) + +# some predefined literals and keywords. +LITERAL_PAGE = LIT("Page") +LITERAL_PAGES = LIT("Pages") + + +class PDFPage: + """An object that holds the information about a page. + + A PDFPage object is merely a convenience class that has a set + of keys and values, which describe the properties of a page + and point to its contents. + + Attributes: + doc: a PDFDocument object. + pageid: any Python object that can uniquely identify the page. + attrs: a dictionary of page attributes. + contents: a list of PDFStream objects that represents the page content. + lastmod: the last modified time of the page. + resources: a dictionary of resources used by the page. + mediabox: the physical size of the page. + cropbox: the crop rectangle of the page. + rotate: the page rotation (in degree). + annots: the page annotations. + beads: a chain that represents natural reading order. + label: the page's label (typically, the logical page number). + """ + + def __init__( + self, doc: PDFDocument, pageid: object, attrs: object, label: Optional[str] + ) -> None: + """Initialize a page object. + + doc: a PDFDocument object. + pageid: any Python object that can uniquely identify the page. + attrs: a dictionary of page attributes. + label: page label string. 
+        """
+        self.doc = doc
+        self.pageid = pageid
+        self.attrs = dict_value(attrs)
+        self.label = label
+        self.lastmod = resolve1(self.attrs.get("LastModified"))
+        self.resources: Dict[object, object] = resolve1(
+            self.attrs.get("Resources", dict())
+        )
+        mediabox_params: List[Any] = [
+            resolve1(mediabox_param) for mediabox_param in self.attrs["MediaBox"]
+        ]
+        self.mediabox: Rect = resolve1(mediabox_params)
+        if "CropBox" in self.attrs:
+            self.cropbox: Rect = resolve1(self.attrs["CropBox"])
+        else:
+            self.cropbox = self.mediabox
+        self.rotate = (int_value(self.attrs.get("Rotate", 0)) + 360) % 360
+        self.annots = self.attrs.get("Annots")
+        self.beads = self.attrs.get("B")
+        if "Contents" in self.attrs:
+            contents = resolve1(self.attrs["Contents"])
+        else:
+            contents = []
+        if not isinstance(contents, list):
+            contents = [contents]
+        self.contents: List[object] = contents
+
+    def __repr__(self) -> str:
+        return "<PDFPage: Resources={!r}, MediaBox={!r}>".format(
+            self.resources, self.mediabox
+        )
+
+    INHERITABLE_ATTRS = {"Resources", "MediaBox", "CropBox", "Rotate"}
+
+    @classmethod
+    def create_pages(cls, document: PDFDocument) -> Iterator["PDFPage"]:
+        def search(
+            obj: object, parent: Dict[str, object]
+        ) -> Iterator[Tuple[int, Dict[object, Dict[object, object]]]]:
+            if isinstance(obj, int):
+                objid = obj
+                tree = dict_value(document.getobj(objid)).copy()
+            else:
+                # This looks broken. obj.objid means obj could be either
+                # PDFObjRef or PDFStream, but neither is valid for dict_value.
+                objid = obj.objid  # type: ignore[attr-defined]
+                tree = dict_value(obj).copy()
+            for (k, v) in parent.items():
+                if k in cls.INHERITABLE_ATTRS and k not in tree:
+                    tree[k] = v
+
+            tree_type = tree.get("Type")
+            if tree_type is None and not settings.STRICT:  # See #64
+                tree_type = tree.get("type")
+
+            if tree_type is LITERAL_PAGES and "Kids" in tree:
+                log.debug("Pages: Kids=%r", tree["Kids"])
+                for c in list_value(tree["Kids"]):
+                    yield from search(c, tree)
+            elif tree_type is LITERAL_PAGE:
+                log.debug("Page: %r", tree)
+                yield (objid, tree)
+
+        try:
+            page_labels: Iterator[Optional[str]] = document.get_page_labels()
+        except PDFNoPageLabels:
+            page_labels = itertools.repeat(None)
+
+        pages = False
+        if "Pages" in document.catalog:
+            objects = search(document.catalog["Pages"], document.catalog)
+            for (objid, tree) in objects:
+                yield cls(document, objid, tree, next(page_labels))
+                pages = True
+        if not pages:
+            # fallback when /Pages is missing.
+            for xref in document.xrefs:
+                for objid in xref.get_objids():
+                    try:
+                        obj = document.getobj(objid)
+                        if isinstance(obj, dict) and obj.get("Type") is LITERAL_PAGE:
+                            yield cls(document, objid, obj, next(page_labels))
+                    except PDFObjectNotFound:
+                        pass
+        return
+
+    @classmethod
+    def get_pages(
+        cls,
+        fp: BinaryIO,
+        pagenos: Optional[Container[int]] = None,
+        maxpages: int = 0,
+        password: str = "",
+        caching: bool = True,
+        check_extractable: bool = False,
+    ) -> Iterator["PDFPage"]:
+        # Create a PDF parser object associated with the file object.
+        parser = PDFParser(fp)
+        # Create a PDF document object that stores the document structure.
+        doc = PDFDocument(parser, password=password, caching=caching)
+        # Check if the document allows text extraction.
+        # If not, warn the user and proceed.
+        if not doc.is_extractable:
+            if check_extractable:
+                error_msg = "Text extraction is not allowed: %r" % fp
+                raise PDFTextExtractionNotAllowed(error_msg)
+            else:
+                warning_msg = (
+                    "The PDF %r contains a metadata field "
+                    "indicating that it should not allow "
+                    "text extraction. 
Ignoring this field "
+                    "and proceeding. Use check_extractable=True "
+                    "if you want to raise an error in this case." % fp
+                )
+                log.warning(warning_msg)
+        # Process each page contained in the document.
+        for (pageno, page) in enumerate(cls.create_pages(doc)):
+            if pagenos and (pageno not in pagenos):
+                continue
+            yield page
+            if maxpages and maxpages <= pageno + 1:
+                break
+        return
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdfparser.py b/templates/skills/file_manager/dependencies/pdfminer/pdfparser.py
new file mode 100644
index 00000000..b4f2d572
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdfparser.py
@@ -0,0 +1,175 @@
+import logging
+from io import BytesIO
+from typing import BinaryIO, TYPE_CHECKING, Optional, Union
+
+from . import settings
+from .pdftypes import PDFObjRef
+from .pdfexceptions import PDFException
+from .pdftypes import PDFStream
+from .pdftypes import dict_value
+from .pdftypes import int_value
+from .psparser import KWD
+from .psexceptions import PSEOF, PSSyntaxError
+from .psparser import PSKeyword
+from .psparser import PSStackParser
+
+if TYPE_CHECKING:
+    from .pdfdocument import PDFDocument
+
+log = logging.getLogger(__name__)
+
+
+class PDFSyntaxError(PDFException):
+    pass
+
+
+# PDFParser stack holds all the base types plus PDFStream, PDFObjRef, and None
+class PDFParser(PSStackParser[Union[PSKeyword, PDFStream, PDFObjRef, None]]):
+    """
+    PDFParser fetches PDF objects from a file stream.
+    It can handle indirect references by referring to
+    a PDF document set by the set_document method.
+    It also reads XRefs at the end of every PDF file.
+
+    Typical usage:
+        parser = PDFParser(fp)
+        parser.read_xref()
+        parser.read_xref(fallback=True)  # optional
+        parser.set_document(doc)
+        parser.seek(offset)
+        parser.nextobject()
+
+    """
+
+    def __init__(self, fp: BinaryIO) -> None:
+        PSStackParser.__init__(self, fp)
+        self.doc: Optional["PDFDocument"] = None
+        self.fallback = False
+
+    def set_document(self, doc: "PDFDocument") -> None:
+        """Associates the parser with a PDFDocument object."""
+        self.doc = doc
+
+    KEYWORD_R = KWD(b"R")
+    KEYWORD_NULL = KWD(b"null")
+    KEYWORD_ENDOBJ = KWD(b"endobj")
+    KEYWORD_STREAM = KWD(b"stream")
+    KEYWORD_XREF = KWD(b"xref")
+    KEYWORD_STARTXREF = KWD(b"startxref")
+
+    def do_keyword(self, pos: int, token: PSKeyword) -> None:
+        """Handles PDF-related keywords."""
+
+        if token in (self.KEYWORD_XREF, self.KEYWORD_STARTXREF):
+            self.add_results(*self.pop(1))
+
+        elif token is self.KEYWORD_ENDOBJ:
+            self.add_results(*self.pop(4))
+
+        elif token is self.KEYWORD_NULL:
+            # null object
+            self.push((pos, None))
+
+        elif token is self.KEYWORD_R:
+            # reference to indirect object
+            if len(self.curstack) >= 2:
+                try:
+                    ((_, objid), (_, genno)) = self.pop(2)
+                    (objid, genno) = (int(objid), int(genno))  # type: ignore[arg-type]
+                    assert self.doc is not None
+                    obj = PDFObjRef(self.doc, objid, genno)
+                    self.push((pos, obj))
+                except PSSyntaxError:
+                    pass
+        elif token is self.KEYWORD_STREAM:
+            # stream object
+            ((_, dic),) = self.pop(1)
+            dic = dict_value(dic)
+            objlen = 0
+            if not self.fallback:
+                try:
+                    objlen = int_value(dic["Length"])
+                except KeyError:
+                    if settings.STRICT:
+                        raise PDFSyntaxError("/Length is undefined: %r" % dic)
+            self.seek(pos)
+            try:
+                (_, line) = self.nextline()  # 'stream'
+            except PSEOF:
+                if settings.STRICT:
+                    raise PDFSyntaxError("Unexpected EOF")
+                return
+            pos += len(line)
+            self.fp.seek(pos)
+            data = bytearray(self.fp.read(objlen))
+            self.seek(pos + objlen)
+            while 1:
+                try:
+                    
(linepos, line) = self.nextline()
+                except PSEOF:
+                    if settings.STRICT:
+                        raise PDFSyntaxError("Unexpected EOF")
+                    break
+                if b"endstream" in line:
+                    i = line.index(b"endstream")
+                    objlen += i
+                    if self.fallback:
+                        data += line[:i]
+                    break
+                objlen += len(line)
+                if self.fallback:
+                    data += line
+            self.seek(pos + objlen)
+            # XXX limit objlen not to exceed object boundary
+            log.debug(
+                "Stream: pos=%d, objlen=%d, dic=%r, data=%r...",
+                pos,
+                objlen,
+                dic,
+                data[:10],
+            )
+            assert self.doc is not None
+            stream = PDFStream(dic, bytes(data), self.doc.decipher)
+            self.push((pos, stream))
+
+        else:
+            # others
+            self.push((pos, token))
+
+
+class PDFStreamParser(PDFParser):
+    """
+    PDFStreamParser is used to parse PDF content streams
+    that are contained in each page and carry instructions
+    for rendering the page. A reference to a PDF document is
+    needed because a PDF content stream can also have
+    indirect references to other objects in the same document.
+    """
+
+    def __init__(self, data: bytes) -> None:
+        PDFParser.__init__(self, BytesIO(data))
+
+    def flush(self) -> None:
+        self.add_results(*self.popall())
+
+    KEYWORD_OBJ = KWD(b"obj")
+
+    def do_keyword(self, pos: int, token: PSKeyword) -> None:
+        if token is self.KEYWORD_R:
+            # reference to indirect object
+            try:
+                ((_, objid), (_, genno)) = self.pop(2)
+                (objid, genno) = (int(objid), int(genno))  # type: ignore[arg-type]
+                obj = PDFObjRef(self.doc, objid, genno)
+                self.push((pos, obj))
+            except PSSyntaxError:
+                pass
+            return
+        elif token in (self.KEYWORD_OBJ, self.KEYWORD_ENDOBJ):
+            if settings.STRICT:
+                # See PDF Spec 3.4.6: Only the object values are stored in the
+                # stream; the obj and endobj keywords are not used.
+                raise PDFSyntaxError("Keyword endobj found in stream")
+            return
+        # others
+        self.push((pos, token))
diff --git a/templates/skills/file_manager/dependencies/pdfminer/pdftypes.py b/templates/skills/file_manager/dependencies/pdfminer/pdftypes.py
new file mode 100644
index 00000000..1fb5be38
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pdfminer/pdftypes.py
@@ -0,0 +1,375 @@
+import io
+import logging
+import zlib
+from typing import (
+    TYPE_CHECKING,
+    Any,
+    Dict,
+    Iterable,
+    Optional,
+    Protocol,
+    Union,
+    List,
+    Tuple,
+    cast,
+)
+
+from . import settings, pdfexceptions
+from .ascii85 import ascii85decode
+from .ascii85 import asciihexdecode
+from .ccitt import ccittfaxdecode
+from .lzw import lzwdecode
+from .psparser import LIT, PSObject
+from .runlength import rldecode
+from .utils import apply_png_predictor
+
+if TYPE_CHECKING:
+    from .pdfdocument import PDFDocument
+
+logger = logging.getLogger(__name__)
+
+LITERAL_CRYPT = LIT("Crypt")
+
+# Abbreviation of Filter names in PDF 4.8.6. 
"Inline Images" +LITERALS_FLATE_DECODE = (LIT("FlateDecode"), LIT("Fl")) +LITERALS_LZW_DECODE = (LIT("LZWDecode"), LIT("LZW")) +LITERALS_ASCII85_DECODE = (LIT("ASCII85Decode"), LIT("A85")) +LITERALS_ASCIIHEX_DECODE = (LIT("ASCIIHexDecode"), LIT("AHx")) +LITERALS_RUNLENGTH_DECODE = (LIT("RunLengthDecode"), LIT("RL")) +LITERALS_CCITTFAX_DECODE = (LIT("CCITTFaxDecode"), LIT("CCF")) +LITERALS_DCT_DECODE = (LIT("DCTDecode"), LIT("DCT")) +LITERALS_JBIG2_DECODE = (LIT("JBIG2Decode"),) +LITERALS_JPX_DECODE = (LIT("JPXDecode"),) + + +class DecipherCallable(Protocol): + """Fully typed a decipher callback, with optional parameter.""" + + def __call__( + self, + objid: int, + genno: int, + data: bytes, + attrs: Optional[Dict[str, Any]] = None, + ) -> bytes: + raise NotImplementedError + + +class PDFObject(PSObject): + pass + + +# Adding aliases for these exceptions for backwards compatibility +PDFException = pdfexceptions.PDFException +PDFTypeError = pdfexceptions.PDFTypeError +PDFValueError = pdfexceptions.PDFValueError +PDFObjectNotFound = pdfexceptions.PDFObjectNotFound +PDFNotImplementedError = pdfexceptions.PDFNotImplementedError + + +class PDFObjRef(PDFObject): + def __init__(self, doc: Optional["PDFDocument"], objid: int, _: object) -> None: + if objid == 0: + if settings.STRICT: + raise PDFValueError("PDF object id cannot be 0.") + self.doc = doc + self.objid = objid + + def __repr__(self) -> str: + return "" % (self.objid) + + def resolve(self, default: object = None) -> Any: + assert self.doc is not None + try: + return self.doc.getobj(self.objid) + except PDFObjectNotFound: + return default + + +def resolve1(x: object, default: object = None) -> Any: + """Resolves an object. + + If this is an array or dictionary, it may still contains + some indirect objects inside. + """ + while isinstance(x, PDFObjRef): + x = x.resolve(default=default) + return x + + +def resolve_all(x: object, default: object = None) -> Any: + """Recursively resolves the given object and all the internals. + + Make sure there is no indirect reference within the nested object. + This procedure might be slow. 
+ """ + while isinstance(x, PDFObjRef): + x = x.resolve(default=default) + if isinstance(x, list): + x = [resolve_all(v, default=default) for v in x] + elif isinstance(x, dict): + for (k, v) in x.items(): + x[k] = resolve_all(v, default=default) + return x + + +def decipher_all(decipher: DecipherCallable, objid: int, genno: int, x: object) -> Any: + """Recursively deciphers the given object.""" + if isinstance(x, bytes): + if len(x) == 0: + return x + return decipher(objid, genno, x) + if isinstance(x, list): + x = [decipher_all(decipher, objid, genno, v) for v in x] + elif isinstance(x, dict): + for (k, v) in x.items(): + x[k] = decipher_all(decipher, objid, genno, v) + return x + + +def int_value(x: object) -> int: + x = resolve1(x) + if not isinstance(x, int): + if settings.STRICT: + raise PDFTypeError("Integer required: %r" % x) + return 0 + return x + + +def float_value(x: object) -> float: + x = resolve1(x) + if not isinstance(x, float): + if settings.STRICT: + raise PDFTypeError("Float required: %r" % x) + return 0.0 + return x + + +def num_value(x: object) -> float: + x = resolve1(x) + if not isinstance(x, (int, float)): # == utils.isnumber(x) + if settings.STRICT: + raise PDFTypeError("Int or Float required: %r" % x) + return 0 + return x + + +def uint_value(x: object, n_bits: int) -> int: + """Resolve number and interpret it as a two's-complement unsigned number""" + xi = int_value(x) + if xi > 0: + return xi + else: + return xi + cast(int, 2**n_bits) + + +def str_value(x: object) -> bytes: + x = resolve1(x) + if not isinstance(x, bytes): + if settings.STRICT: + raise PDFTypeError("String required: %r" % x) + return b"" + return x + + +def list_value(x: object) -> Union[List[Any], Tuple[Any, ...]]: + x = resolve1(x) + if not isinstance(x, (list, tuple)): + if settings.STRICT: + raise PDFTypeError("List required: %r" % x) + return [] + return x + + +def dict_value(x: object) -> Dict[Any, Any]: + x = resolve1(x) + if not isinstance(x, dict): + if settings.STRICT: + logger.error("PDFTypeError : Dict required: %r", x) + raise PDFTypeError("Dict required: %r" % x) + return {} + return x + + +def stream_value(x: object) -> "PDFStream": + x = resolve1(x) + if not isinstance(x, PDFStream): + if settings.STRICT: + raise PDFTypeError("PDFStream required: %r" % x) + return PDFStream({}, b"") + return x + + +def decompress_corrupted(data: bytes) -> bytes: + """Called on some data that can't be properly decoded because of CRC checksum + error. Attempt to decode it skipping the CRC. 
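+
+    (The data is fed to zlib one byte at a time, so everything decompressed
+    before the corrupt tail is still recovered and returned.)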
+    """
+    d = zlib.decompressobj()
+    f = io.BytesIO(data)
+    result_str = b""
+    buffer = f.read(1)
+    i = 0
+    try:
+        while buffer:
+            result_str += d.decompress(buffer)
+            buffer = f.read(1)
+            i += 1
+    except zlib.error:
+        # Let the error propagate if we're not yet in the CRC checksum
+        if i < len(data) - 3:
+            logger.warning("Data-loss while decompressing corrupted data")
+    return result_str
+
+
+class PDFStream(PDFObject):
+    def __init__(
+        self,
+        attrs: Dict[str, Any],
+        rawdata: bytes,
+        decipher: Optional[DecipherCallable] = None,
+    ) -> None:
+        assert isinstance(attrs, dict), str(type(attrs))
+        self.attrs = attrs
+        self.rawdata: Optional[bytes] = rawdata
+        self.decipher = decipher
+        self.data: Optional[bytes] = None
+        self.objid: Optional[int] = None
+        self.genno: Optional[int] = None
+
+    def set_objid(self, objid: int, genno: int) -> None:
+        self.objid = objid
+        self.genno = genno
+
+    def __repr__(self) -> str:
+        if self.data is None:
+            assert self.rawdata is not None
+            return "<PDFStream(%r): raw=%d, %r>" % (
+                self.objid,
+                len(self.rawdata),
+                self.attrs,
+            )
+        else:
+            assert self.data is not None
+            return "<PDFStream(%r): len=%d, %r>" % (
+                self.objid,
+                len(self.data),
+                self.attrs,
+            )
+
+    def __contains__(self, name: object) -> bool:
+        return name in self.attrs
+
+    def __getitem__(self, name: str) -> Any:
+        return self.attrs[name]
+
+    def get(self, name: str, default: object = None) -> Any:
+        return self.attrs.get(name, default)
+
+    def get_any(self, names: Iterable[str], default: object = None) -> Any:
+        for name in names:
+            if name in self.attrs:
+                return self.attrs[name]
+        return default
+
+    def get_filters(self) -> List[Tuple[Any, Any]]:
+        filters = self.get_any(("F", "Filter"))
+        params = self.get_any(("DP", "DecodeParms", "FDecodeParms"), {})
+        if not filters:
+            return []
+        if not isinstance(filters, list):
+            filters = [filters]
+        if not isinstance(params, list):
+            # Make sure the parameters list is the same as filters.
+            params = [params] * len(filters)
+        if settings.STRICT and len(params) != len(filters):
+            raise PDFException("Parameters len filter mismatch")
+
+        resolved_filters = [resolve1(f) for f in filters]
+        resolved_params = [resolve1(param) for param in params]
+        return list(zip(resolved_filters, resolved_params))
+
+    def decode(self) -> None:
+        assert self.data is None and self.rawdata is not None, str(
+            (self.data, self.rawdata)
+        )
+        data = self.rawdata
+        if self.decipher:
+            # Handle encryption
+            assert self.objid is not None
+            assert self.genno is not None
+            data = self.decipher(self.objid, self.genno, data, self.attrs)
+        filters = self.get_filters()
+        if not filters:
+            self.data = data
+            self.rawdata = None
+            return
+        for (f, params) in filters:
+            if f in LITERALS_FLATE_DECODE:
+                # will get errors if the document is encrypted.
+                try:
+                    data = zlib.decompress(data)
+
+                except zlib.error as e:
+                    if settings.STRICT:
+                        error_msg = f"Invalid zlib bytes: {e!r}, {data!r}"
+                        raise PDFException(error_msg)
+
+                    try:
+                        data = decompress_corrupted(data)
+                    except zlib.error:
+                        data = b""
+
+            elif f in LITERALS_LZW_DECODE:
+                data = lzwdecode(data)
+            elif f in LITERALS_ASCII85_DECODE:
+                data = ascii85decode(data)
+            elif f in LITERALS_ASCIIHEX_DECODE:
+                data = asciihexdecode(data)
+            elif f in LITERALS_RUNLENGTH_DECODE:
+                data = rldecode(data)
+            elif f in LITERALS_CCITTFAX_DECODE:
+                data = ccittfaxdecode(data, params)
+            elif f in LITERALS_DCT_DECODE:
+                # This is probably a JPG stream
+                # it does not need to be decoded twice.
+                # Just return the stream to the user. 
+ pass + elif f in LITERALS_JBIG2_DECODE: + pass + elif f in LITERALS_JPX_DECODE: + pass + elif f == LITERAL_CRYPT: + # not yet.. + raise PDFNotImplementedError("/Crypt filter is unsupported") + else: + raise PDFNotImplementedError("Unsupported filter: %r" % f) + # apply predictors + if params and "Predictor" in params: + pred = int_value(params["Predictor"]) + if pred == 1: + # no predictor + pass + elif 10 <= pred: + # PNG predictor + colors = int_value(params.get("Colors", 1)) + columns = int_value(params.get("Columns", 1)) + raw_bits_per_component = params.get("BitsPerComponent", 8) + bitspercomponent = int_value(raw_bits_per_component) + data = apply_png_predictor( + pred, colors, columns, bitspercomponent, data + ) + else: + error_msg = "Unsupported predictor: %r" % pred + raise PDFNotImplementedError(error_msg) + self.data = data + self.rawdata = None + return + + def get_data(self) -> bytes: + if self.data is None: + self.decode() + assert self.data is not None + return self.data + + def get_rawdata(self) -> Optional[bytes]: + return self.rawdata diff --git a/templates/skills/file_manager/dependencies/pdfminer/psexceptions.py b/templates/skills/file_manager/dependencies/pdfminer/psexceptions.py new file mode 100644 index 00000000..b8291dc0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/psexceptions.py @@ -0,0 +1,18 @@ +class PSException(Exception): + pass + + +class PSEOF(PSException): + pass + + +class PSSyntaxError(PSException): + pass + + +class PSTypeError(PSException): + pass + + +class PSValueError(PSException): + pass diff --git a/templates/skills/file_manager/dependencies/pdfminer/psparser.py b/templates/skills/file_manager/dependencies/pdfminer/psparser.py new file mode 100644 index 00000000..36172c7a --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/psparser.py @@ -0,0 +1,667 @@ +#!/usr/bin/env python3 + +# -*- coding: utf-8 -*- + +import logging +import re +from typing import ( + Any, + BinaryIO, + Dict, + Generic, + Iterator, + List, + Optional, + Tuple, + Type, + TypeVar, + Union, +) + +from . import settings, psexceptions +from .utils import choplist + +log = logging.getLogger(__name__) + + +# Adding aliases for these exceptions for backwards compatibility +PSException = psexceptions.PSException +PSEOF = psexceptions.PSEOF +PSSyntaxError = psexceptions.PSSyntaxError +PSTypeError = psexceptions.PSTypeError +PSValueError = psexceptions.PSValueError + + +class PSObject: + """Base class for all PS or PDF-related data types.""" + + pass + + +class PSLiteral(PSObject): + + """A class that represents a PostScript literal. + + Postscript literals are used as identifiers, such as + variable names, property names and dictionary keys. + Literals are case sensitive and denoted by a preceding + slash sign (e.g. "/Name") + + Note: Do not create an instance of PSLiteral directly. + Always use PSLiteralTable.intern(). + """ + + NameType = Union[str, bytes] + + def __init__(self, name: NameType) -> None: + self.name = name + + def __repr__(self) -> str: + name = self.name + return "/%r" % name + + +class PSKeyword(PSObject): + + """A class that represents a PostScript keyword. + + PostScript keywords are a dozen of predefined words. + Commands and directives in PostScript are expressed by keywords. + They are also used to denote the content boundaries. + + Note: Do not create an instance of PSKeyword directly. + Always use PSKeywordTable.intern(). 
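+
+    A small interning illustration (editor's sketch, using the KWD helper
+    defined later in this module):
+
+        KWD(b"obj") is KWD(b"obj")   # True -- both names map to one instance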
+ """ + + def __init__(self, name: bytes) -> None: + self.name = name + + def __repr__(self) -> str: + name = self.name + return "/%r" % name + + +_SymbolT = TypeVar("_SymbolT", PSLiteral, PSKeyword) + + +class PSSymbolTable(Generic[_SymbolT]): + """A utility class for storing PSLiteral/PSKeyword objects. + + Interned objects can be checked its identity with "is" operator. + """ + + def __init__(self, klass: Type[_SymbolT]) -> None: + self.dict: Dict[PSLiteral.NameType, _SymbolT] = {} + self.klass: Type[_SymbolT] = klass + + def intern(self, name: PSLiteral.NameType) -> _SymbolT: + if name in self.dict: + lit = self.dict[name] + else: + # Type confusion issue: PSKeyword always takes bytes as name + # PSLiteral uses either str or bytes + lit = self.klass(name) # type: ignore[arg-type] + self.dict[name] = lit + return lit + + +PSLiteralTable = PSSymbolTable(PSLiteral) +PSKeywordTable = PSSymbolTable(PSKeyword) +LIT = PSLiteralTable.intern +KWD = PSKeywordTable.intern +KEYWORD_PROC_BEGIN = KWD(b"{") +KEYWORD_PROC_END = KWD(b"}") +KEYWORD_ARRAY_BEGIN = KWD(b"[") +KEYWORD_ARRAY_END = KWD(b"]") +KEYWORD_DICT_BEGIN = KWD(b"<<") +KEYWORD_DICT_END = KWD(b">>") + + +def literal_name(x: object) -> Any: + if not isinstance(x, PSLiteral): + if settings.STRICT: + raise PSTypeError(f"Literal required: {x!r}") + else: + name = x + else: + name = x.name + if not isinstance(name, str): + try: + name = str(name, "utf-8") + except Exception: + pass + return name + + +def keyword_name(x: object) -> Any: + if not isinstance(x, PSKeyword): + if settings.STRICT: + raise PSTypeError("Keyword required: %r" % x) + else: + name = x + else: + name = str(x.name, "utf-8", "ignore") + return name + + +EOL = re.compile(rb"[\r\n]") +SPC = re.compile(rb"\s") +NONSPC = re.compile(rb"\S") +HEX = re.compile(rb"[0-9a-fA-F]") +END_LITERAL = re.compile(rb"[#/%\[\]()<>{}\s]") +END_HEX_STRING = re.compile(rb"[^\s0-9a-fA-F]") +HEX_PAIR = re.compile(rb"[0-9a-fA-F]{2}|.") +END_NUMBER = re.compile(rb"[^0-9]") +END_KEYWORD = re.compile(rb"[#/%\[\]()<>{}\s]") +END_STRING = re.compile(rb"[()\134]") +OCT_STRING = re.compile(rb"[0-7]") +ESC_STRING = { + b"b": 8, + b"t": 9, + b"n": 10, + b"f": 12, + b"r": 13, + b"(": 40, + b")": 41, + b"\\": 92, +} + + +PSBaseParserToken = Union[float, bool, PSLiteral, PSKeyword, bytes] + + +class PSBaseParser: + + """Most basic PostScript parser that performs only tokenization.""" + + BUFSIZ = 4096 + + def __init__(self, fp: BinaryIO) -> None: + self.fp = fp + self.seek(0) + + def __repr__(self) -> str: + return "<%s: %r, bufpos=%d>" % (self.__class__.__name__, self.fp, self.bufpos) + + def flush(self) -> None: + return + + def close(self) -> None: + self.flush() + return + + def tell(self) -> int: + return self.bufpos + self.charpos + + def poll(self, pos: Optional[int] = None, n: int = 80) -> None: + pos0 = self.fp.tell() + if not pos: + pos = self.bufpos + self.charpos + self.fp.seek(pos) + log.debug("poll(%d): %r", pos, self.fp.read(n)) + self.fp.seek(pos0) + return + + def seek(self, pos: int) -> None: + """Seeks the parser to the given position.""" + log.debug("seek: %r", pos) + self.fp.seek(pos) + # reset the status for nextline() + self.bufpos = pos + self.buf = b"" + self.charpos = 0 + # reset the status for nexttoken() + self._parse1 = self._parse_main + self._curtoken = b"" + self._curtokenpos = 0 + self._tokens: List[Tuple[int, PSBaseParserToken]] = [] + return + + def fillbuf(self) -> None: + if self.charpos < len(self.buf): + return + # fetch next chunk. 
+        self.bufpos = self.fp.tell()
+        self.buf = self.fp.read(self.BUFSIZ)
+        if not self.buf:
+            raise PSEOF("Unexpected EOF")
+        self.charpos = 0
+        return
+
+    def nextline(self) -> Tuple[int, bytes]:
+        """Fetches the next line that ends either with \\r or \\n."""
+        linebuf = b""
+        linepos = self.bufpos + self.charpos
+        eol = False
+        while 1:
+            self.fillbuf()
+            if eol:
+                c = self.buf[self.charpos : self.charpos + 1]
+                # handle b'\r\n'
+                if c == b"\n":
+                    linebuf += c
+                    self.charpos += 1
+                break
+            m = EOL.search(self.buf, self.charpos)
+            if m:
+                linebuf += self.buf[self.charpos : m.end(0)]
+                self.charpos = m.end(0)
+                if linebuf[-1:] == b"\r":
+                    eol = True
+                else:
+                    break
+            else:
+                linebuf += self.buf[self.charpos :]
+                self.charpos = len(self.buf)
+        log.debug("nextline: %r, %r", linepos, linebuf)
+
+        return (linepos, linebuf)
+
+    def revreadlines(self) -> Iterator[bytes]:
+        """Fetches lines backwards.
+
+        This is used to locate the trailers at the end of a file.
+        """
+        self.fp.seek(0, 2)
+        pos = self.fp.tell()
+        buf = b""
+        while 0 < pos:
+            prevpos = pos
+            pos = max(0, pos - self.BUFSIZ)
+            self.fp.seek(pos)
+            s = self.fp.read(prevpos - pos)
+            if not s:
+                break
+            while 1:
+                n = max(s.rfind(b"\r"), s.rfind(b"\n"))
+                if n == -1:
+                    buf = s + buf
+                    break
+                yield s[n:] + buf
+                s = s[:n]
+                buf = b""
+        return
+
+    def _parse_main(self, s: bytes, i: int) -> int:
+        m = NONSPC.search(s, i)
+        if not m:
+            return len(s)
+        j = m.start(0)
+        c = s[j : j + 1]
+        self._curtokenpos = self.bufpos + j
+        if c == b"%":
+            self._curtoken = b"%"
+            self._parse1 = self._parse_comment
+            return j + 1
+        elif c == b"/":
+            self._curtoken = b""
+            self._parse1 = self._parse_literal
+            return j + 1
+        elif c in b"-+" or c.isdigit():
+            self._curtoken = c
+            self._parse1 = self._parse_number
+            return j + 1
+        elif c == b".":
+            self._curtoken = c
+            self._parse1 = self._parse_float
+            return j + 1
+        elif c.isalpha():
+            self._curtoken = c
+            self._parse1 = self._parse_keyword
+            return j + 1
+        elif c == b"(":
+            self._curtoken = b""
+            self.paren = 1
+            self._parse1 = self._parse_string
+            return j + 1
+        elif c == b"<":
+            self._curtoken = b""
+            self._parse1 = self._parse_wopen
+            return j + 1
+        elif c == b">":
+            self._curtoken = b""
+            self._parse1 = self._parse_wclose
+            return j + 1
+        elif c == b"\x00":
+            return j + 1
+        else:
+            self._add_token(KWD(c))
+            return j + 1
+
+    def _add_token(self, obj: PSBaseParserToken) -> None:
+        self._tokens.append((self._curtokenpos, obj))
+        return
+
+    def _parse_comment(self, s: bytes, i: int) -> int:
+        m = EOL.search(s, i)
+        if not m:
+            self._curtoken += s[i:]
+            return len(s)
+        j = m.start(0)
+        self._curtoken += s[i:j]
+        self._parse1 = self._parse_main
+        # We ignore comments.
+ # self._tokens.append(self._curtoken) + return j + + def _parse_literal(self, s: bytes, i: int) -> int: + m = END_LITERAL.search(s, i) + if not m: + self._curtoken += s[i:] + return len(s) + j = m.start(0) + self._curtoken += s[i:j] + c = s[j : j + 1] + if c == b"#": + self.hex = b"" + self._parse1 = self._parse_literal_hex + return j + 1 + try: + name: Union[str, bytes] = str(self._curtoken, "utf-8") + except Exception: + name = self._curtoken + self._add_token(LIT(name)) + self._parse1 = self._parse_main + return j + + def _parse_literal_hex(self, s: bytes, i: int) -> int: + c = s[i : i + 1] + if HEX.match(c) and len(self.hex) < 2: + self.hex += c + return i + 1 + if self.hex: + self._curtoken += bytes((int(self.hex, 16),)) + self._parse1 = self._parse_literal + return i + + def _parse_number(self, s: bytes, i: int) -> int: + m = END_NUMBER.search(s, i) + if not m: + self._curtoken += s[i:] + return len(s) + j = m.start(0) + self._curtoken += s[i:j] + c = s[j : j + 1] + if c == b".": + self._curtoken += c + self._parse1 = self._parse_float + return j + 1 + try: + self._add_token(int(self._curtoken)) + except ValueError: + pass + self._parse1 = self._parse_main + return j + + def _parse_float(self, s: bytes, i: int) -> int: + m = END_NUMBER.search(s, i) + if not m: + self._curtoken += s[i:] + return len(s) + j = m.start(0) + self._curtoken += s[i:j] + try: + self._add_token(float(self._curtoken)) + except ValueError: + pass + self._parse1 = self._parse_main + return j + + def _parse_keyword(self, s: bytes, i: int) -> int: + m = END_KEYWORD.search(s, i) + if m: + j = m.start(0) + self._curtoken += s[i:j] + else: + # Use the rest of the stream if no non-keyword character is found. This + # can happen if the keyword is the final bytes of the stream + # (https://github.com/pdfminer/pdfminer.six/issues/884). + j = len(s) + self._curtoken += s[i:] + if self._curtoken == b"true": + token: Union[bool, PSKeyword] = True + elif self._curtoken == b"false": + token = False + else: + token = KWD(self._curtoken) + self._add_token(token) + self._parse1 = self._parse_main + return j + + def _parse_string(self, s: bytes, i: int) -> int: + m = END_STRING.search(s, i) + if not m: + self._curtoken += s[i:] + return len(s) + j = m.start(0) + self._curtoken += s[i:j] + c = s[j : j + 1] + if c == b"\\": + self.oct = b"" + self._parse1 = self._parse_string_1 + return j + 1 + if c == b"(": + self.paren += 1 + self._curtoken += c + return j + 1 + if c == b")": + self.paren -= 1 + if self.paren: + # WTF, they said balanced parens need no special treatment. 
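+                # (Editor's note: the PDF spec allows balanced, unescaped
+                # parentheses inside literal strings, e.g. (a(b)c), so the
+                # nesting depth is tracked in self.paren and the token only
+                # closes when the depth returns to zero.)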
+ self._curtoken += c + return j + 1 + self._add_token(self._curtoken) + self._parse1 = self._parse_main + return j + 1 + + def _parse_string_1(self, s: bytes, i: int) -> int: + """Parse literal strings + + PDF Reference 3.2.3 + """ + c = s[i : i + 1] + if OCT_STRING.match(c) and len(self.oct) < 3: + self.oct += c + return i + 1 + + elif self.oct: + self._curtoken += bytes((int(self.oct, 8),)) + self._parse1 = self._parse_string + return i + + elif c in ESC_STRING: + self._curtoken += bytes((ESC_STRING[c],)) + + elif c == b"\r" and len(s) > i + 1 and s[i + 1 : i + 2] == b"\n": + # If current and next character is \r\n skip both because enters + # after a \ are ignored + i += 1 + + # default action + self._parse1 = self._parse_string + return i + 1 + + def _parse_wopen(self, s: bytes, i: int) -> int: + c = s[i : i + 1] + if c == b"<": + self._add_token(KEYWORD_DICT_BEGIN) + self._parse1 = self._parse_main + i += 1 + else: + self._parse1 = self._parse_hexstring + return i + + def _parse_wclose(self, s: bytes, i: int) -> int: + c = s[i : i + 1] + if c == b">": + self._add_token(KEYWORD_DICT_END) + i += 1 + self._parse1 = self._parse_main + return i + + def _parse_hexstring(self, s: bytes, i: int) -> int: + m = END_HEX_STRING.search(s, i) + if not m: + self._curtoken += s[i:] + return len(s) + j = m.start(0) + self._curtoken += s[i:j] + token = HEX_PAIR.sub( + lambda m: bytes((int(m.group(0), 16),)), SPC.sub(b"", self._curtoken) + ) + self._add_token(token) + self._parse1 = self._parse_main + return j + + def nexttoken(self) -> Tuple[int, PSBaseParserToken]: + while not self._tokens: + self.fillbuf() + self.charpos = self._parse1(self.buf, self.charpos) + token = self._tokens.pop(0) + log.debug("nexttoken: %r", token) + return token + + +# Stack slots may by occupied by any of: +# * the PSBaseParserToken types +# * list (via KEYWORD_ARRAY) +# * dict (via KEYWORD_DICT) +# * subclass-specific extensions (e.g. 
PDFStream, PDFObjRef) via ExtraT +ExtraT = TypeVar("ExtraT") +PSStackType = Union[float, bool, PSLiteral, bytes, List, Dict, ExtraT] +PSStackEntry = Tuple[int, PSStackType[ExtraT]] + + +class PSStackParser(PSBaseParser, Generic[ExtraT]): + def __init__(self, fp: BinaryIO) -> None: + PSBaseParser.__init__(self, fp) + self.reset() + return + + def reset(self) -> None: + self.context: List[Tuple[int, Optional[str], List[PSStackEntry[ExtraT]]]] = [] + self.curtype: Optional[str] = None + self.curstack: List[PSStackEntry[ExtraT]] = [] + self.results: List[PSStackEntry[ExtraT]] = [] + return + + def seek(self, pos: int) -> None: + PSBaseParser.seek(self, pos) + self.reset() + return + + def push(self, *objs: PSStackEntry[ExtraT]) -> None: + self.curstack.extend(objs) + return + + def pop(self, n: int) -> List[PSStackEntry[ExtraT]]: + objs = self.curstack[-n:] + self.curstack[-n:] = [] + return objs + + def popall(self) -> List[PSStackEntry[ExtraT]]: + objs = self.curstack + self.curstack = [] + return objs + + def add_results(self, *objs: PSStackEntry[ExtraT]) -> None: + try: + log.debug("add_results: %r", objs) + except Exception: + log.debug("add_results: (unprintable object)") + self.results.extend(objs) + return + + def start_type(self, pos: int, type: str) -> None: + self.context.append((pos, self.curtype, self.curstack)) + (self.curtype, self.curstack) = (type, []) + log.debug("start_type: pos=%r, type=%r", pos, type) + return + + def end_type(self, type: str) -> Tuple[int, List[PSStackType[ExtraT]]]: + if self.curtype != type: + raise PSTypeError(f"Type mismatch: {self.curtype!r} != {type!r}") + objs = [obj for (_, obj) in self.curstack] + (pos, self.curtype, self.curstack) = self.context.pop() + log.debug("end_type: pos=%r, type=%r, objs=%r", pos, type, objs) + return (pos, objs) + + def do_keyword(self, pos: int, token: PSKeyword) -> None: + return + + def nextobject(self) -> PSStackEntry[ExtraT]: + """Yields a list of objects. + + Arrays and dictionaries are represented as Python lists and + dictionaries. + + :return: keywords, literals, strings, numbers, arrays and dictionaries. 
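+
+        A hedged usage sketch (editor's illustration; the file name is
+        hypothetical, and PSEOF is raised once the stream is exhausted):
+
+            parser = PSStackParser(open("example.ps", "rb"))
+            pos, obj = parser.nextobject()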
+ """ + while not self.results: + (pos, token) = self.nexttoken() + if isinstance(token, (int, float, bool, str, bytes, PSLiteral)): + # normal token + self.push((pos, token)) + elif token == KEYWORD_ARRAY_BEGIN: + # begin array + self.start_type(pos, "a") + elif token == KEYWORD_ARRAY_END: + # end array + try: + self.push(self.end_type("a")) + except PSTypeError: + if settings.STRICT: + raise + elif token == KEYWORD_DICT_BEGIN: + # begin dictionary + self.start_type(pos, "d") + elif token == KEYWORD_DICT_END: + # end dictionary + try: + (pos, objs) = self.end_type("d") + if len(objs) % 2 != 0: + error_msg = "Invalid dictionary construct: %r" % objs + raise PSSyntaxError(error_msg) + d = { + literal_name(k): v + for (k, v) in choplist(2, objs) + if v is not None + } + self.push((pos, d)) + except PSTypeError: + if settings.STRICT: + raise + elif token == KEYWORD_PROC_BEGIN: + # begin proc + self.start_type(pos, "p") + elif token == KEYWORD_PROC_END: + # end proc + try: + self.push(self.end_type("p")) + except PSTypeError: + if settings.STRICT: + raise + elif isinstance(token, PSKeyword): + log.debug( + "do_keyword: pos=%r, token=%r, stack=%r", pos, token, self.curstack + ) + self.do_keyword(pos, token) + else: + log.error( + "unknown token: pos=%r, token=%r, stack=%r", + pos, + token, + self.curstack, + ) + self.do_keyword(pos, token) + raise PSException + if self.context: + continue + else: + self.flush() + obj = self.results.pop(0) + try: + log.debug("nextobject: %r", obj) + except Exception: + log.debug("nextobject: (unprintable object)") + return obj diff --git a/templates/skills/file_manager/dependencies/pdfminer/py.typed b/templates/skills/file_manager/dependencies/pdfminer/py.typed new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/pdfminer/runlength.py b/templates/skills/file_manager/dependencies/pdfminer/runlength.py new file mode 100644 index 00000000..72096600 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/runlength.py @@ -0,0 +1,40 @@ +# +# RunLength decoder (Adobe version) implementation based on PDF Reference +# version 1.4 section 3.3.4. +# +# * public domain * +# + + +def rldecode(data: bytes) -> bytes: + """ + RunLength decoder (Adobe version) implementation based on PDF Reference + version 1.4 section 3.3.4: + The RunLengthDecode filter decodes data that has been encoded in a + simple byte-oriented format based on run length. The encoded data + is a sequence of runs, where each run consists of a length byte + followed by 1 to 128 bytes of data. If the length byte is in the + range 0 to 127, the following length + 1 (1 to 128) bytes are + copied literally during decompression. If length is in the range + 129 to 255, the following single byte is to be copied 257 - length + (2 to 128) times during decompression. A length value of 128 + denotes EOD. 
+ """ + decoded = b"" + i = 0 + while i < len(data): + length = data[i] + if length == 128: + break + + if length >= 0 and length < 128: + for j in range(i + 1, (i + 1) + (length + 1)): + decoded += bytes((data[j],)) + i = (i + 1) + (length + 1) + + if length > 128: + run = bytes((data[i + 1],)) * (257 - length) + decoded += run + i = (i + 1) + 1 + + return decoded diff --git a/templates/skills/file_manager/dependencies/pdfminer/settings.py b/templates/skills/file_manager/dependencies/pdfminer/settings.py new file mode 100644 index 00000000..810077a0 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/settings.py @@ -0,0 +1 @@ +STRICT = False diff --git a/templates/skills/file_manager/dependencies/pdfminer/utils.py b/templates/skills/file_manager/dependencies/pdfminer/utils.py new file mode 100644 index 00000000..fae1f643 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pdfminer/utils.py @@ -0,0 +1,806 @@ +""" +Miscellaneous Routines. +""" +import io +import pathlib +import string +import struct +from html import escape +from typing import ( + Any, + BinaryIO, + Callable, + Dict, + Generic, + Iterable, + Iterator, + List, + Optional, + Set, + TextIO, + Tuple, + TypeVar, + Union, + TYPE_CHECKING, + cast, +) + +from .pdfexceptions import PDFTypeError, PDFValueError + +if TYPE_CHECKING: + from .layout import LTComponent + +import charset_normalizer # For str encoding detection + +# from sys import maxint as INF doesn't work anymore under Python3, but PDF +# still uses 32 bits ints +INF = (1 << 31) - 1 + + +FileOrName = Union[pathlib.PurePath, str, io.IOBase] +AnyIO = Union[TextIO, BinaryIO] + + +class open_filename: + """ + Context manager that allows opening a filename + (str or pathlib.PurePath type is supported) and closes it on exit, + (just like `open`), but does nothing for file-like objects. + """ + + def __init__(self, filename: FileOrName, *args: Any, **kwargs: Any) -> None: + if isinstance(filename, pathlib.PurePath): + filename = str(filename) + if isinstance(filename, str): + self.file_handler: AnyIO = open(filename, *args, **kwargs) + self.closing = True + elif isinstance(filename, io.IOBase): + self.file_handler = cast(AnyIO, filename) + self.closing = False + else: + raise PDFTypeError("Unsupported input type: %s" % type(filename)) + + def __enter__(self) -> AnyIO: + return self.file_handler + + def __exit__(self, exc_type: object, exc_val: object, exc_tb: object) -> None: + if self.closing: + self.file_handler.close() + + +def make_compat_bytes(in_str: str) -> bytes: + "Converts to bytes, encoding to unicode." + assert isinstance(in_str, str), str(type(in_str)) + return in_str.encode() + + +def make_compat_str(o: object) -> str: + """Converts everything to string, if bytes guessing the encoding.""" + if isinstance(o, bytes): + enc = charset_normalizer.detect(o) + try: + return o.decode(enc["encoding"]) + except UnicodeDecodeError: + return str(o) + else: + return str(o) + + +def shorten_str(s: str, size: int) -> str: + if size < 7: + return s[:size] + if len(s) > size: + length = (size - 5) // 2 + return f"{s[:length]} ... {s[-length:]}" + else: + return s + + +def compatible_encode_method( + bytesorstring: Union[bytes, str], encoding: str = "utf-8", erraction: str = "ignore" +) -> str: + """When Py2 str.encode is called, it often means bytes.encode in Py3. + + This does either. 
+ """ + if isinstance(bytesorstring, str): + return bytesorstring + assert isinstance(bytesorstring, bytes), str(type(bytesorstring)) + return bytesorstring.decode(encoding, erraction) + + +def paeth_predictor(left: int, above: int, upper_left: int) -> int: + # From http://www.libpng.org/pub/png/spec/1.2/PNG-Filters.html + # Initial estimate + p = left + above - upper_left + # Distances to a,b,c + pa = abs(p - left) + pb = abs(p - above) + pc = abs(p - upper_left) + + # Return nearest of a,b,c breaking ties in order a,b,c + if pa <= pb and pa <= pc: + return left + elif pb <= pc: + return above + else: + return upper_left + + +def apply_png_predictor( + pred: int, colors: int, columns: int, bitspercomponent: int, data: bytes +) -> bytes: + """Reverse the effect of the PNG predictor + + Documentation: http://www.libpng.org/pub/png/spec/1.2/PNG-Filters.html + """ + if bitspercomponent not in [8, 1]: + msg = "Unsupported `bitspercomponent': %d" % bitspercomponent + raise PDFValueError(msg) + + nbytes = colors * columns * bitspercomponent // 8 + bpp = colors * bitspercomponent // 8 # number of bytes per complete pixel + buf = [] + line_above = list(b"\x00" * columns) + for scanline_i in range(0, len(data), nbytes + 1): + filter_type = data[scanline_i] + line_encoded = data[scanline_i + 1 : scanline_i + 1 + nbytes] + raw = [] + + if filter_type == 0: + # Filter type 0: None + raw = list(line_encoded) + + elif filter_type == 1: + # Filter type 1: Sub + # To reverse the effect of the Sub() filter after decompression, + # output the following value: + # Raw(x) = Sub(x) + Raw(x - bpp) + # (computed mod 256), where Raw() refers to the bytes already + # decoded. + for j, sub_x in enumerate(line_encoded): + if j - bpp < 0: + raw_x_bpp = 0 + else: + raw_x_bpp = int(raw[j - bpp]) + raw_x = (sub_x + raw_x_bpp) & 255 + raw.append(raw_x) + + elif filter_type == 2: + # Filter type 2: Up + # To reverse the effect of the Up() filter after decompression, + # output the following value: + # Raw(x) = Up(x) + Prior(x) + # (computed mod 256), where Prior() refers to the decoded bytes of + # the prior scanline. + for (up_x, prior_x) in zip(line_encoded, line_above): + raw_x = (up_x + prior_x) & 255 + raw.append(raw_x) + + elif filter_type == 3: + # Filter type 3: Average + # To reverse the effect of the Average() filter after + # decompression, output the following value: + # Raw(x) = Average(x) + floor((Raw(x-bpp)+Prior(x))/2) + # where the result is computed mod 256, but the prediction is + # calculated in the same way as for encoding. Raw() refers to the + # bytes already decoded, and Prior() refers to the decoded bytes of + # the prior scanline. + for j, average_x in enumerate(line_encoded): + if j - bpp < 0: + raw_x_bpp = 0 + else: + raw_x_bpp = int(raw[j - bpp]) + prior_x = int(line_above[j]) + raw_x = (average_x + (raw_x_bpp + prior_x) // 2) & 255 + raw.append(raw_x) + + elif filter_type == 4: + # Filter type 4: Paeth + # To reverse the effect of the Paeth() filter after decompression, + # output the following value: + # Raw(x) = Paeth(x) + # + PaethPredictor(Raw(x-bpp), Prior(x), Prior(x-bpp)) + # (computed mod 256), where Raw() and Prior() refer to bytes + # already decoded. Exactly the same PaethPredictor() function is + # used by both encoder and decoder. 
+ for j, paeth_x in enumerate(line_encoded): + if j - bpp < 0: + raw_x_bpp = 0 + prior_x_bpp = 0 + else: + raw_x_bpp = int(raw[j - bpp]) + prior_x_bpp = int(line_above[j - bpp]) + prior_x = int(line_above[j]) + paeth = paeth_predictor(raw_x_bpp, prior_x, prior_x_bpp) + raw_x = (paeth_x + paeth) & 255 + raw.append(raw_x) + + else: + raise PDFValueError("Unsupported predictor value: %d" % filter_type) + + buf.extend(raw) + line_above = raw + return bytes(buf) + + +Point = Tuple[float, float] +Rect = Tuple[float, float, float, float] +Matrix = Tuple[float, float, float, float, float, float] +PathSegment = Union[ + Tuple[str], # Literal['h'] + Tuple[str, float, float], # Literal['m', 'l'] + Tuple[str, float, float, float, float], # Literal['v', 'y'] + Tuple[str, float, float, float, float, float, float], +] # Literal['c'] + +# Matrix operations +MATRIX_IDENTITY: Matrix = (1, 0, 0, 1, 0, 0) + + +def mult_matrix(m1: Matrix, m0: Matrix) -> Matrix: + (a1, b1, c1, d1, e1, f1) = m1 + (a0, b0, c0, d0, e0, f0) = m0 + """Returns the multiplication of two matrices.""" + return ( + a0 * a1 + c0 * b1, + b0 * a1 + d0 * b1, + a0 * c1 + c0 * d1, + b0 * c1 + d0 * d1, + a0 * e1 + c0 * f1 + e0, + b0 * e1 + d0 * f1 + f0, + ) + + +def translate_matrix(m: Matrix, v: Point) -> Matrix: + """Translates a matrix by (x, y).""" + (a, b, c, d, e, f) = m + (x, y) = v + return a, b, c, d, x * a + y * c + e, x * b + y * d + f + + +def apply_matrix_pt(m: Matrix, v: Point) -> Point: + (a, b, c, d, e, f) = m + (x, y) = v + """Applies a matrix to a point.""" + return a * x + c * y + e, b * x + d * y + f + + +def apply_matrix_norm(m: Matrix, v: Point) -> Point: + """Equivalent to apply_matrix_pt(M, (p,q)) - apply_matrix_pt(M, (0,0))""" + (a, b, c, d, e, f) = m + (p, q) = v + return a * p + c * q, b * p + d * q + + +# Utility functions + + +def isnumber(x: object) -> bool: + return isinstance(x, (int, float)) + + +_T = TypeVar("_T") + + +def uniq(objs: Iterable[_T]) -> Iterator[_T]: + """Eliminates duplicated elements.""" + done = set() + for obj in objs: + if obj in done: + continue + done.add(obj) + yield obj + return + + +def fsplit(pred: Callable[[_T], bool], objs: Iterable[_T]) -> Tuple[List[_T], List[_T]]: + """Split a list into two classes according to the predicate.""" + t = [] + f = [] + for obj in objs: + if pred(obj): + t.append(obj) + else: + f.append(obj) + return t, f + + +def drange(v0: float, v1: float, d: int) -> range: + """Returns a discrete range.""" + return range(int(v0) // d, int(v1 + d) // d) + + +def get_bound(pts: Iterable[Point]) -> Rect: + """Compute a minimal rectangle that covers all the points.""" + limit: Rect = (INF, INF, -INF, -INF) + (x0, y0, x1, y1) = limit + for (x, y) in pts: + x0 = min(x0, x) + y0 = min(y0, y) + x1 = max(x1, x) + y1 = max(y1, y) + return x0, y0, x1, y1 + + +def pick( + seq: Iterable[_T], func: Callable[[_T], float], maxobj: Optional[_T] = None +) -> Optional[_T]: + """Picks the object obj where func(obj) has the highest value.""" + maxscore = None + for obj in seq: + score = func(obj) + if maxscore is None or maxscore < score: + (maxscore, maxobj) = (score, obj) + return maxobj + + +def choplist(n: int, seq: Iterable[_T]) -> Iterator[Tuple[_T, ...]]: + """Groups every n elements of the list.""" + r = [] + for x in seq: + r.append(x) + if len(r) == n: + yield tuple(r) + r = [] + return + + +def nunpack(s: bytes, default: int = 0) -> int: + """Unpacks 1 to 4 or 8 byte integers (big endian).""" + length = len(s) + if not length: + return default + elif length == 1: + return 
ord(s) + elif length == 2: + return cast(int, struct.unpack(">H", s)[0]) + elif length == 3: + return cast(int, struct.unpack(">L", b"\x00" + s)[0]) + elif length == 4: + return cast(int, struct.unpack(">L", s)[0]) + elif length == 8: + return cast(int, struct.unpack(">Q", s)[0]) + else: + raise PDFTypeError("invalid length: %d" % length) + + +PDFDocEncoding = "".join( + chr(x) + for x in ( + 0x0000, + 0x0001, + 0x0002, + 0x0003, + 0x0004, + 0x0005, + 0x0006, + 0x0007, + 0x0008, + 0x0009, + 0x000A, + 0x000B, + 0x000C, + 0x000D, + 0x000E, + 0x000F, + 0x0010, + 0x0011, + 0x0012, + 0x0013, + 0x0014, + 0x0015, + 0x0017, + 0x0017, + 0x02D8, + 0x02C7, + 0x02C6, + 0x02D9, + 0x02DD, + 0x02DB, + 0x02DA, + 0x02DC, + 0x0020, + 0x0021, + 0x0022, + 0x0023, + 0x0024, + 0x0025, + 0x0026, + 0x0027, + 0x0028, + 0x0029, + 0x002A, + 0x002B, + 0x002C, + 0x002D, + 0x002E, + 0x002F, + 0x0030, + 0x0031, + 0x0032, + 0x0033, + 0x0034, + 0x0035, + 0x0036, + 0x0037, + 0x0038, + 0x0039, + 0x003A, + 0x003B, + 0x003C, + 0x003D, + 0x003E, + 0x003F, + 0x0040, + 0x0041, + 0x0042, + 0x0043, + 0x0044, + 0x0045, + 0x0046, + 0x0047, + 0x0048, + 0x0049, + 0x004A, + 0x004B, + 0x004C, + 0x004D, + 0x004E, + 0x004F, + 0x0050, + 0x0051, + 0x0052, + 0x0053, + 0x0054, + 0x0055, + 0x0056, + 0x0057, + 0x0058, + 0x0059, + 0x005A, + 0x005B, + 0x005C, + 0x005D, + 0x005E, + 0x005F, + 0x0060, + 0x0061, + 0x0062, + 0x0063, + 0x0064, + 0x0065, + 0x0066, + 0x0067, + 0x0068, + 0x0069, + 0x006A, + 0x006B, + 0x006C, + 0x006D, + 0x006E, + 0x006F, + 0x0070, + 0x0071, + 0x0072, + 0x0073, + 0x0074, + 0x0075, + 0x0076, + 0x0077, + 0x0078, + 0x0079, + 0x007A, + 0x007B, + 0x007C, + 0x007D, + 0x007E, + 0x0000, + 0x2022, + 0x2020, + 0x2021, + 0x2026, + 0x2014, + 0x2013, + 0x0192, + 0x2044, + 0x2039, + 0x203A, + 0x2212, + 0x2030, + 0x201E, + 0x201C, + 0x201D, + 0x2018, + 0x2019, + 0x201A, + 0x2122, + 0xFB01, + 0xFB02, + 0x0141, + 0x0152, + 0x0160, + 0x0178, + 0x017D, + 0x0131, + 0x0142, + 0x0153, + 0x0161, + 0x017E, + 0x0000, + 0x20AC, + 0x00A1, + 0x00A2, + 0x00A3, + 0x00A4, + 0x00A5, + 0x00A6, + 0x00A7, + 0x00A8, + 0x00A9, + 0x00AA, + 0x00AB, + 0x00AC, + 0x0000, + 0x00AE, + 0x00AF, + 0x00B0, + 0x00B1, + 0x00B2, + 0x00B3, + 0x00B4, + 0x00B5, + 0x00B6, + 0x00B7, + 0x00B8, + 0x00B9, + 0x00BA, + 0x00BB, + 0x00BC, + 0x00BD, + 0x00BE, + 0x00BF, + 0x00C0, + 0x00C1, + 0x00C2, + 0x00C3, + 0x00C4, + 0x00C5, + 0x00C6, + 0x00C7, + 0x00C8, + 0x00C9, + 0x00CA, + 0x00CB, + 0x00CC, + 0x00CD, + 0x00CE, + 0x00CF, + 0x00D0, + 0x00D1, + 0x00D2, + 0x00D3, + 0x00D4, + 0x00D5, + 0x00D6, + 0x00D7, + 0x00D8, + 0x00D9, + 0x00DA, + 0x00DB, + 0x00DC, + 0x00DD, + 0x00DE, + 0x00DF, + 0x00E0, + 0x00E1, + 0x00E2, + 0x00E3, + 0x00E4, + 0x00E5, + 0x00E6, + 0x00E7, + 0x00E8, + 0x00E9, + 0x00EA, + 0x00EB, + 0x00EC, + 0x00ED, + 0x00EE, + 0x00EF, + 0x00F0, + 0x00F1, + 0x00F2, + 0x00F3, + 0x00F4, + 0x00F5, + 0x00F6, + 0x00F7, + 0x00F8, + 0x00F9, + 0x00FA, + 0x00FB, + 0x00FC, + 0x00FD, + 0x00FE, + 0x00FF, + ) +) + + +def decode_text(s: bytes) -> str: + """Decodes a PDFDocEncoding string to Unicode.""" + if s.startswith(b"\xfe\xff"): + return str(s[2:], "utf-16be", "ignore") + else: + return "".join(PDFDocEncoding[c] for c in s) + + +def enc(x: str) -> str: + """Encodes a string for SGML/XML/HTML""" + if isinstance(x, bytes): + return "" + return escape(x) + + +def bbox2str(bbox: Rect) -> str: + (x0, y0, x1, y1) = bbox + return f"{x0:.3f},{y0:.3f},{x1:.3f},{y1:.3f}" + + +def matrix2str(m: Matrix) -> str: + (a, b, c, d, e, f) = m + return f"[{a:.2f},{b:.2f},{c:.2f},{d:.2f}, ({e:.2f},{f:.2f})]" + 
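+
+
+# Editor's illustrative sketch (not part of pdfminer): how the affine-matrix
+# helpers above compose. Guarded so it never runs on import.
+if __name__ == "__main__":
+    _m = translate_matrix(MATRIX_IDENTITY, (10, 5))  # a matrix shifting by (10, 5)
+    assert apply_matrix_pt(_m, (1, 2)) == (11, 7)    # points are translated
+    assert apply_matrix_norm(_m, (1, 2)) == (1, 2)   # direction vectors are not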
+
+
+def vecBetweenBoxes(obj1: "LTComponent", obj2: "LTComponent") -> Point:
+    """A distance function between two TextBoxes.
+
+    Consider the bounding rectangle for obj1 and obj2.
+    Return the vector between the two boxes' boundaries if they don't
+    overlap; otherwise return the vector between the boxes' centers.
+
+             +------+..........+ (x1, y1)
+             | obj1 |          :
+             +------+www+------+
+             :          | obj2 |
+    (x0, y0) +..........+------+
+    """
+    (x0, y0) = (min(obj1.x0, obj2.x0), min(obj1.y0, obj2.y0))
+    (x1, y1) = (max(obj1.x1, obj2.x1), max(obj1.y1, obj2.y1))
+    (ow, oh) = (x1 - x0, y1 - y0)
+    (iw, ih) = (ow - obj1.width - obj2.width, oh - obj1.height - obj2.height)
+    if iw < 0 and ih < 0:
+        # if one is inside another we compute euclidean distance
+        (xc1, yc1) = ((obj1.x0 + obj1.x1) / 2, (obj1.y0 + obj1.y1) / 2)
+        (xc2, yc2) = ((obj2.x0 + obj2.x1) / 2, (obj2.y0 + obj2.y1) / 2)
+        return xc1 - xc2, yc1 - yc2
+    else:
+        return max(0, iw), max(0, ih)
+
+
+LTComponentT = TypeVar("LTComponentT", bound="LTComponent")
+
+
+class Plane(Generic[LTComponentT]):
+    """A set-like data structure for objects placed on a plane.
+
+    Can efficiently find objects in a certain rectangular area.
+    It maintains two parallel lists of objects, each of
+    which is sorted by its x or y coordinate.
+    """
+
+    def __init__(self, bbox: Rect, gridsize: int = 50) -> None:
+        self._seq: List[LTComponentT] = []  # preserve the object order.
+        self._objs: Set[LTComponentT] = set()
+        self._grid: Dict[Point, List[LTComponentT]] = {}
+        self.gridsize = gridsize
+        (self.x0, self.y0, self.x1, self.y1) = bbox
+
+    def __repr__(self) -> str:
+        return "<Plane objs=%r>" % list(self)
+
+    def __iter__(self) -> Iterator[LTComponentT]:
+        return (obj for obj in self._seq if obj in self._objs)
+
+    def __len__(self) -> int:
+        return len(self._objs)
+
+    def __contains__(self, obj: object) -> bool:
+        return obj in self._objs
+
+    def _getrange(self, bbox: Rect) -> Iterator[Point]:
+        (x0, y0, x1, y1) = bbox
+        if x1 <= self.x0 or self.x1 <= x0 or y1 <= self.y0 or self.y1 <= y0:
+            return
+        x0 = max(self.x0, x0)
+        y0 = max(self.y0, y0)
+        x1 = min(self.x1, x1)
+        y1 = min(self.y1, y1)
+        for grid_y in drange(y0, y1, self.gridsize):
+            for grid_x in drange(x0, x1, self.gridsize):
+                yield (grid_x, grid_y)
+
+    def extend(self, objs: Iterable[LTComponentT]) -> None:
+        for obj in objs:
+            self.add(obj)
+
+    def add(self, obj: LTComponentT) -> None:
+        """place an object."""
+        for k in self._getrange((obj.x0, obj.y0, obj.x1, obj.y1)):
+            if k not in self._grid:
+                r: List[LTComponentT] = []
+                self._grid[k] = r
+            else:
+                r = self._grid[k]
+            r.append(obj)
+        self._seq.append(obj)
+        self._objs.add(obj)
+
+    def remove(self, obj: LTComponentT) -> None:
+        """displace an object."""
+        for k in self._getrange((obj.x0, obj.y0, obj.x1, obj.y1)):
+            try:
+                self._grid[k].remove(obj)
+            except (KeyError, ValueError):
+                pass
+        self._objs.remove(obj)
+
+    def find(self, bbox: Rect) -> Iterator[LTComponentT]:
+        """finds objects that are in a certain area."""
+        (x0, y0, x1, y1) = bbox
+        done = set()
+        for k in self._getrange(bbox):
+            if k not in self._grid:
+                continue
+            for obj in self._grid[k]:
+                if obj in done:
+                    continue
+                done.add(obj)
+                if obj.x1 <= x0 or x1 <= obj.x0 or obj.y1 <= y0 or y1 <= obj.y0:
+                    continue
+                yield obj
+
+
+ROMAN_ONES = ["i", "x", "c", "m"]
+ROMAN_FIVES = ["v", "l", "d"]
+
+
+def format_int_roman(value: int) -> str:
+    """Format a number as lowercase Roman numerals."""
+
+    assert 0 < value < 4000
+    result: List[str] = []
+    index = 0
+
+    while value != 0:
+        value, remainder = divmod(value, 10)
+        if remainder == 9:
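+            # e.g. 9 -> "ix", 90 -> "xc": a ones-symbol prefixed to the next
+            # magnitude's ones-symbol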
+ result.insert(0, ROMAN_ONES[index]) + result.insert(1, ROMAN_ONES[index + 1]) + elif remainder == 4: + result.insert(0, ROMAN_ONES[index]) + result.insert(1, ROMAN_FIVES[index]) + else: + over_five = remainder >= 5 + if over_five: + result.insert(0, ROMAN_FIVES[index]) + remainder -= 5 + result.insert(1 if over_five else 0, ROMAN_ONES[index] * remainder) + index += 1 + + return "".join(result) + + +def format_int_alpha(value: int) -> str: + """Format a number as lowercase letters a-z, aa-zz, etc.""" + + assert value > 0 + result: List[str] = [] + + while value != 0: + value, remainder = divmod(value - 1, len(string.ascii_lowercase)) + result.append(string.ascii_lowercase[remainder]) + + result.reverse() + return "".join(result) diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/INSTALLER b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/INSTALLER new file mode 100644 index 00000000..a1b589e3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/LICENSE b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/LICENSE new file mode 100644 index 00000000..bee14a47 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/LICENSE @@ -0,0 +1,27 @@ +pycparser -- A C parser in Python + +Copyright (c) 2008-2022, Eli Bendersky +All rights reserved. + +Redistribution and use in source and binary forms, with or without modification, +are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. +* Neither the name of the copyright holder nor the names of its contributors may + be used to endorse or promote products derived from this software without + specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE +LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR +CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE +GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) +HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT +LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT +OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/METADATA b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/METADATA new file mode 100644 index 00000000..2c8038a3 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/METADATA @@ -0,0 +1,28 @@ +Metadata-Version: 2.1 +Name: pycparser +Version: 2.22 +Summary: C parser in Python +Home-page: https://github.com/eliben/pycparser +Author: Eli Bendersky +Author-email: eliben@gmail.com +Maintainer: Eli Bendersky +License: BSD-3-Clause +Platform: Cross Platform +Classifier: Development Status :: 5 - Production/Stable +Classifier: License :: OSI Approved :: BSD License +Classifier: Programming Language :: Python :: 3 +Classifier: Programming Language :: Python :: 3.8 +Classifier: Programming Language :: Python :: 3.9 +Classifier: Programming Language :: Python :: 3.10 +Classifier: Programming Language :: Python :: 3.11 +Classifier: Programming Language :: Python :: 3.12 +Requires-Python: >=3.8 +License-File: LICENSE + + + pycparser is a complete parser of the C language, written in + pure Python using the PLY parsing library. + It parses C code into an AST and can serve as a front-end for + C compilers or analysis tools. + + diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/RECORD b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/RECORD new file mode 100644 index 00000000..0c6403e7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/RECORD @@ -0,0 +1,42 @@ +pycparser-2.22.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +pycparser-2.22.dist-info/LICENSE,sha256=DIRjmTaep23de1xE_m0WSXQV_PAV9cu1CMJL-YuBxbE,1543 +pycparser-2.22.dist-info/METADATA,sha256=3XOB8nggH4ijl17DCjUhk7g6qioMJLprUlEkwYgZvW8,943 +pycparser-2.22.dist-info/RECORD,, +pycparser-2.22.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0 +pycparser-2.22.dist-info/WHEEL,sha256=G16H4A3IeoQmnOrYV4ueZGKSjhipXx8zc8nu9FGlvMA,92 +pycparser-2.22.dist-info/top_level.txt,sha256=c-lPcS74L_8KoH7IE6PQF5ofyirRQNV4VhkbSFIPeWM,10 +pycparser/__init__.py,sha256=hrf-AyuVYNHQGTD0Nv2bywxoTN3N1ZCs03m-9-QDS14,2918 +pycparser/__pycache__/__init__.cpython-311.pyc,, +pycparser/__pycache__/_ast_gen.cpython-311.pyc,, +pycparser/__pycache__/_build_tables.cpython-311.pyc,, +pycparser/__pycache__/ast_transforms.cpython-311.pyc,, +pycparser/__pycache__/c_ast.cpython-311.pyc,, +pycparser/__pycache__/c_generator.cpython-311.pyc,, +pycparser/__pycache__/c_lexer.cpython-311.pyc,, +pycparser/__pycache__/c_parser.cpython-311.pyc,, +pycparser/__pycache__/lextab.cpython-311.pyc,, +pycparser/__pycache__/plyparser.cpython-311.pyc,, +pycparser/__pycache__/yacctab.cpython-311.pyc,, +pycparser/_ast_gen.py,sha256=0JRVnDW-Jw-3IjVlo8je9rbAcp6Ko7toHAnB5zi7h0Q,10555 +pycparser/_build_tables.py,sha256=4d_UkIxJ4YfHTVn6xBzBA52wDo7qxg1B6aZAJYJas9Q,1087 +pycparser/_c_ast.cfg,sha256=ld5ezE9yzIJFIVAUfw7ezJSlMi4nXKNCzfmqjOyQTNo,4255 +pycparser/ast_transforms.py,sha256=GTMYlUgWmXd5wJVyovXY1qzzAqjxzCpVVg0664dKGBs,5691 +pycparser/c_ast.py,sha256=HWeOrfYdCY0u5XaYhE1i60uVyE3yMWdcxzECUX-DqJw,31445 +pycparser/c_generator.py,sha256=yi6Mcqxv88J5ue8k5-mVGxh3iJ37iD4QyF-sWcGjC-8,17772 +pycparser/c_lexer.py,sha256=RSUjq0SRH8dkvwrQslBIZY2AXOrpQpe-oO1udJXotZk,17186 +pycparser/c_parser.py,sha256=WUnIHNydl32QBuRUqrqk-F2lyB6WRP4BUYFELqVETyw,74282 +pycparser/lextab.py,sha256=Nc3I0_D8Xlf-BOpfOKkEvFw-rPuFPPwAjkcLubwTCU4,8554 
+pycparser/ply/__init__.py,sha256=q4s86QwRsYRa20L9ueSxfh-hPihpftBjDOvYa2_SS2Y,102 +pycparser/ply/__pycache__/__init__.cpython-311.pyc,, +pycparser/ply/__pycache__/cpp.cpython-311.pyc,, +pycparser/ply/__pycache__/ctokens.cpython-311.pyc,, +pycparser/ply/__pycache__/lex.cpython-311.pyc,, +pycparser/ply/__pycache__/yacc.cpython-311.pyc,, +pycparser/ply/__pycache__/ygen.cpython-311.pyc,, +pycparser/ply/cpp.py,sha256=UtC3ylTWp5_1MKA-PLCuwKQR8zSOnlGuGGIdzj8xS98,33282 +pycparser/ply/ctokens.py,sha256=MKksnN40TehPhgVfxCJhjj_BjL943apreABKYz-bl0Y,3177 +pycparser/ply/lex.py,sha256=rCMi0yjlZmjH5SNXj_Yds1VxSDkaG2thS7351YvfN-I,42926 +pycparser/ply/yacc.py,sha256=eatSDkRLgRr6X3-hoDk_SQQv065R0BdL2K7fQ54CgVM,137323 +pycparser/ply/ygen.py,sha256=2JYNeYtrPz1JzLSLO3d4GsS8zJU8jY_I_CR1VI9gWrA,2251 +pycparser/plyparser.py,sha256=8tLOoEytcapvWrr1JfCf7Dog-wulBtS1YrDs8S7JfMo,4875 +pycparser/yacctab.py,sha256=B6ck8QEPnRi04VSxKEL6xHaP8sEEsTbWtwsjfKHABgM,209738 diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/REQUESTED b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/REQUESTED new file mode 100644 index 00000000..e69de29b diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/WHEEL b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/WHEEL new file mode 100644 index 00000000..becc9a66 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/top_level.txt b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/top_level.txt new file mode 100644 index 00000000..dc1c9e10 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser-2.22.dist-info/top_level.txt @@ -0,0 +1 @@ +pycparser diff --git a/templates/skills/file_manager/dependencies/pycparser/__init__.py b/templates/skills/file_manager/dependencies/pycparser/__init__.py new file mode 100644 index 00000000..bf4b0d41 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/__init__.py @@ -0,0 +1,93 @@ +#----------------------------------------------------------------- +# pycparser: __init__.py +# +# This package file exports some convenience functions for +# interacting with pycparser +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- +__all__ = ['c_lexer', 'c_parser', 'c_ast'] +__version__ = '2.22' + +import io +from subprocess import check_output +from .c_parser import CParser + + +def preprocess_file(filename, cpp_path='cpp', cpp_args=''): + """ Preprocess a file using cpp. + + filename: + Name of the file you want to preprocess. + + cpp_path: + cpp_args: + Refer to the documentation of parse_file for the meaning of these + arguments. + + When successful, returns the preprocessed file's contents. + Errors from cpp will be printed out. + """ + path_list = [cpp_path] + if isinstance(cpp_args, list): + path_list += cpp_args + elif cpp_args != '': + path_list += [cpp_args] + path_list += [filename] + + try: + # Note the use of universal_newlines to treat all newlines + # as \n for Python's purpose + text = check_output(path_list, universal_newlines=True) + except OSError as e: + raise RuntimeError("Unable to invoke 'cpp'. 
" + + 'Make sure its path was passed correctly\n' + + ('Original error: %s' % e)) + + return text + + +def parse_file(filename, use_cpp=False, cpp_path='cpp', cpp_args='', + parser=None, encoding=None): + """ Parse a C file using pycparser. + + filename: + Name of the file you want to parse. + + use_cpp: + Set to True if you want to execute the C pre-processor + on the file prior to parsing it. + + cpp_path: + If use_cpp is True, this is the path to 'cpp' on your + system. If no path is provided, it attempts to just + execute 'cpp', so it must be in your PATH. + + cpp_args: + If use_cpp is True, set this to the command line arguments strings + to cpp. Be careful with quotes - it's best to pass a raw string + (r'') here. For example: + r'-I../utils/fake_libc_include' + If several arguments are required, pass a list of strings. + + encoding: + Encoding to use for the file to parse + + parser: + Optional parser object to be used instead of the default CParser + + When successful, an AST is returned. ParseError can be + thrown if the file doesn't parse successfully. + + Errors from cpp will be printed out. + """ + if use_cpp: + text = preprocess_file(filename, cpp_path, cpp_args) + else: + with io.open(filename, encoding=encoding) as f: + text = f.read() + + if parser is None: + parser = CParser() + return parser.parse(text, filename) diff --git a/templates/skills/file_manager/dependencies/pycparser/_ast_gen.py b/templates/skills/file_manager/dependencies/pycparser/_ast_gen.py new file mode 100644 index 00000000..0f7d330b --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/_ast_gen.py @@ -0,0 +1,336 @@ +#----------------------------------------------------------------- +# _ast_gen.py +# +# Generates the AST Node classes from a specification given in +# a configuration file +# +# The design of this module was inspired by astgen.py from the +# Python 2.5 code-base. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- +from string import Template + + +class ASTCodeGenerator(object): + def __init__(self, cfg_filename='_c_ast.cfg'): + """ Initialize the code generator from a configuration + file. + """ + self.cfg_filename = cfg_filename + self.node_cfg = [NodeCfg(name, contents) + for (name, contents) in self.parse_cfgfile(cfg_filename)] + + def generate(self, file=None): + """ Generates the code into file, an open file buffer. + """ + src = Template(_PROLOGUE_COMMENT).substitute( + cfg_filename=self.cfg_filename) + + src += _PROLOGUE_CODE + for node_cfg in self.node_cfg: + src += node_cfg.generate_source() + '\n\n' + + file.write(src) + + def parse_cfgfile(self, filename): + """ Parse the configuration file and yield pairs of + (name, contents) for each node. + """ + with open(filename, "r") as f: + for line in f: + line = line.strip() + if not line or line.startswith('#'): + continue + colon_i = line.find(':') + lbracket_i = line.find('[') + rbracket_i = line.find(']') + if colon_i < 1 or lbracket_i <= colon_i or rbracket_i <= lbracket_i: + raise RuntimeError("Invalid line in %s:\n%s\n" % (filename, line)) + + name = line[:colon_i] + val = line[lbracket_i + 1:rbracket_i] + vallist = [v.strip() for v in val.split(',')] if val else [] + yield name, vallist + + +class NodeCfg(object): + """ Node configuration. + + name: node name + contents: a list of contents - attributes and child nodes + See comment at the top of the configuration file for details. 
+ """ + + def __init__(self, name, contents): + self.name = name + self.all_entries = [] + self.attr = [] + self.child = [] + self.seq_child = [] + + for entry in contents: + clean_entry = entry.rstrip('*') + self.all_entries.append(clean_entry) + + if entry.endswith('**'): + self.seq_child.append(clean_entry) + elif entry.endswith('*'): + self.child.append(clean_entry) + else: + self.attr.append(entry) + + def generate_source(self): + src = self._gen_init() + src += '\n' + self._gen_children() + src += '\n' + self._gen_iter() + src += '\n' + self._gen_attr_names() + return src + + def _gen_init(self): + src = "class %s(Node):\n" % self.name + + if self.all_entries: + args = ', '.join(self.all_entries) + slots = ', '.join("'{0}'".format(e) for e in self.all_entries) + slots += ", 'coord', '__weakref__'" + arglist = '(self, %s, coord=None)' % args + else: + slots = "'coord', '__weakref__'" + arglist = '(self, coord=None)' + + src += " __slots__ = (%s)\n" % slots + src += " def __init__%s:\n" % arglist + + for name in self.all_entries + ['coord']: + src += " self.%s = %s\n" % (name, name) + + return src + + def _gen_children(self): + src = ' def children(self):\n' + + if self.all_entries: + src += ' nodelist = []\n' + + for child in self.child: + src += ( + ' if self.%(child)s is not None:' + + ' nodelist.append(("%(child)s", self.%(child)s))\n') % ( + dict(child=child)) + + for seq_child in self.seq_child: + src += ( + ' for i, child in enumerate(self.%(child)s or []):\n' + ' nodelist.append(("%(child)s[%%d]" %% i, child))\n') % ( + dict(child=seq_child)) + + src += ' return tuple(nodelist)\n' + else: + src += ' return ()\n' + + return src + + def _gen_iter(self): + src = ' def __iter__(self):\n' + + if self.all_entries: + for child in self.child: + src += ( + ' if self.%(child)s is not None:\n' + + ' yield self.%(child)s\n') % (dict(child=child)) + + for seq_child in self.seq_child: + src += ( + ' for child in (self.%(child)s or []):\n' + ' yield child\n') % (dict(child=seq_child)) + + if not (self.child or self.seq_child): + # Empty generator + src += ( + ' return\n' + + ' yield\n') + else: + # Empty generator + src += ( + ' return\n' + + ' yield\n') + + return src + + def _gen_attr_names(self): + src = " attr_names = (" + ''.join("%r, " % nm for nm in self.attr) + ')' + return src + + +_PROLOGUE_COMMENT = \ +r'''#----------------------------------------------------------------- +# ** ATTENTION ** +# This code was automatically generated from the file: +# $cfg_filename +# +# Do not modify it directly. Modify the configuration file and +# run the generator again. +# ** ** *** ** ** +# +# pycparser: c_ast.py +# +# AST Node classes. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- + +''' + +_PROLOGUE_CODE = r''' +import sys + +def _repr(obj): + """ + Get the representation of an object, with dedicated pprint-like format for lists. + """ + if isinstance(obj, list): + return '[' + (',\n '.join((_repr(e).replace('\n', '\n ') for e in obj))) + '\n]' + else: + return repr(obj) + +class Node(object): + __slots__ = () + """ Abstract base class for AST nodes. 
+ """ + def __repr__(self): + """ Generates a python representation of the current node + """ + result = self.__class__.__name__ + '(' + + indent = '' + separator = '' + for name in self.__slots__[:-2]: + result += separator + result += indent + result += name + '=' + (_repr(getattr(self, name)).replace('\n', '\n ' + (' ' * (len(name) + len(self.__class__.__name__))))) + + separator = ',' + indent = '\n ' + (' ' * len(self.__class__.__name__)) + + result += indent + ')' + + return result + + def children(self): + """ A sequence of all children that are Nodes + """ + pass + + def show(self, buf=sys.stdout, offset=0, attrnames=False, nodenames=False, showcoord=False, _my_node_name=None): + """ Pretty print the Node and all its attributes and + children (recursively) to a buffer. + + buf: + Open IO buffer into which the Node is printed. + + offset: + Initial offset (amount of leading spaces) + + attrnames: + True if you want to see the attribute names in + name=value pairs. False to only see the values. + + nodenames: + True if you want to see the actual node names + within their parents. + + showcoord: + Do you want the coordinates of each Node to be + displayed. + """ + lead = ' ' * offset + if nodenames and _my_node_name is not None: + buf.write(lead + self.__class__.__name__+ ' <' + _my_node_name + '>: ') + else: + buf.write(lead + self.__class__.__name__+ ': ') + + if self.attr_names: + if attrnames: + nvlist = [(n, getattr(self,n)) for n in self.attr_names] + attrstr = ', '.join('%s=%s' % nv for nv in nvlist) + else: + vlist = [getattr(self, n) for n in self.attr_names] + attrstr = ', '.join('%s' % v for v in vlist) + buf.write(attrstr) + + if showcoord: + buf.write(' (at %s)' % self.coord) + buf.write('\n') + + for (child_name, child) in self.children(): + child.show( + buf, + offset=offset + 2, + attrnames=attrnames, + nodenames=nodenames, + showcoord=showcoord, + _my_node_name=child_name) + + +class NodeVisitor(object): + """ A base NodeVisitor class for visiting c_ast nodes. + Subclass it and define your own visit_XXX methods, where + XXX is the class name you want to visit with these + methods. + + For example: + + class ConstantVisitor(NodeVisitor): + def __init__(self): + self.values = [] + + def visit_Constant(self, node): + self.values.append(node.value) + + Creates a list of values of all the constant nodes + encountered below the given node. To use it: + + cv = ConstantVisitor() + cv.visit(node) + + Notes: + + * generic_visit() will be called for AST nodes for which + no visit_XXX method was defined. + * The children of nodes for which a visit_XXX was + defined will not be visited - if you need this, call + generic_visit() on the node. + You can use: + NodeVisitor.generic_visit(self, node) + * Modeled after Python's own AST visiting facilities + (the ast module of Python 3.0) + """ + + _method_cache = None + + def visit(self, node): + """ Visit a node. + """ + + if self._method_cache is None: + self._method_cache = {} + + visitor = self._method_cache.get(node.__class__.__name__, None) + if visitor is None: + method = 'visit_' + node.__class__.__name__ + visitor = getattr(self, method, self.generic_visit) + self._method_cache[node.__class__.__name__] = visitor + + return visitor(node) + + def generic_visit(self, node): + """ Called if no explicit visitor function exists for a + node. Implements preorder visiting of the node. 
+ """ + for c in node: + self.visit(c) + +''' diff --git a/templates/skills/file_manager/dependencies/pycparser/_build_tables.py b/templates/skills/file_manager/dependencies/pycparser/_build_tables.py new file mode 100644 index 00000000..4f371079 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/_build_tables.py @@ -0,0 +1,40 @@ +#----------------------------------------------------------------- +# pycparser: _build_tables.py +# +# A dummy for generating the lexing/parsing tables and and +# compiling them into .pyc for faster execution in optimized mode. +# Also generates AST code from the configuration file. +# Should be called from the pycparser directory. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- + +# Insert '.' and '..' as first entries to the search path for modules. +# Restricted environments like embeddable python do not include the +# current working directory on startup. +import importlib +import sys +sys.path[0:0] = ['.', '..'] + +# Generate c_ast.py +from _ast_gen import ASTCodeGenerator +ast_gen = ASTCodeGenerator('_c_ast.cfg') +ast_gen.generate(open('c_ast.py', 'w')) + +from pycparser import c_parser + +# Generates the tables +# +c_parser.CParser( + lex_optimize=True, + yacc_debug=False, + yacc_optimize=True) + +# Load to compile into .pyc +# +importlib.invalidate_caches() + +import lextab +import yacctab +import c_ast diff --git a/templates/skills/file_manager/dependencies/pycparser/_c_ast.cfg b/templates/skills/file_manager/dependencies/pycparser/_c_ast.cfg new file mode 100644 index 00000000..0626533e --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/_c_ast.cfg @@ -0,0 +1,195 @@ +#----------------------------------------------------------------- +# pycparser: _c_ast.cfg +# +# Defines the AST Node classes used in pycparser. +# +# Each entry is a Node sub-class name, listing the attributes +# and child nodes of the class: +# * - a child node +# ** - a sequence of child nodes +# - an attribute +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- + +# ArrayDecl is a nested declaration of an array with the given type. +# dim: the dimension (for example, constant 42) +# dim_quals: list of dimension qualifiers, to support C99's allowing 'const' +# and 'static' within the array dimension in function declarations. +ArrayDecl: [type*, dim*, dim_quals] + +ArrayRef: [name*, subscript*] + +# op: =, +=, /= etc. +# +Assignment: [op, lvalue*, rvalue*] + +Alignas: [alignment*] + +BinaryOp: [op, left*, right*] + +Break: [] + +Case: [expr*, stmts**] + +Cast: [to_type*, expr*] + +# Compound statement in C99 is a list of block items (declarations or +# statements). +# +Compound: [block_items**] + +# Compound literal (anonymous aggregate) for C99. +# (type-name) {initializer_list} +# type: the typename +# init: InitList for the initializer list +# +CompoundLiteral: [type*, init*] + +# type: int, char, float, string, etc. +# +Constant: [type, value] + +Continue: [] + +# name: the variable being declared +# quals: list of qualifiers (const, volatile) +# funcspec: list function specifiers (i.e. inline in C99) +# storage: list of storage specifiers (extern, register, etc.) 
+# type: declaration type (probably nested with all the modifiers) +# init: initialization value, or None +# bitsize: bit field size, or None +# +Decl: [name, quals, align, storage, funcspec, type*, init*, bitsize*] + +DeclList: [decls**] + +Default: [stmts**] + +DoWhile: [cond*, stmt*] + +# Represents the ellipsis (...) parameter in a function +# declaration +# +EllipsisParam: [] + +# An empty statement (a semicolon ';' on its own) +# +EmptyStatement: [] + +# Enumeration type specifier +# name: an optional ID +# values: an EnumeratorList +# +Enum: [name, values*] + +# A name/value pair for enumeration values +# +Enumerator: [name, value*] + +# A list of enumerators +# +EnumeratorList: [enumerators**] + +# A list of expressions separated by the comma operator. +# +ExprList: [exprs**] + +# This is the top of the AST, representing a single C file (a +# translation unit in K&R jargon). It contains a list of +# "external-declaration"s, which is either declarations (Decl), +# Typedef or function definitions (FuncDef). +# +FileAST: [ext**] + +# for (init; cond; next) stmt +# +For: [init*, cond*, next*, stmt*] + +# name: Id +# args: ExprList +# +FuncCall: [name*, args*] + +# type (args) +# +FuncDecl: [args*, type*] + +# Function definition: a declarator for the function name and +# a body, which is a compound statement. +# There's an optional list of parameter declarations for old +# K&R-style definitions +# +FuncDef: [decl*, param_decls**, body*] + +Goto: [name] + +ID: [name] + +# Holder for types that are a simple identifier (e.g. the built +# ins void, char etc. and typedef-defined types) +# +IdentifierType: [names] + +If: [cond*, iftrue*, iffalse*] + +# An initialization list used for compound literals. +# +InitList: [exprs**] + +Label: [name, stmt*] + +# A named initializer for C99. +# The name of a NamedInitializer is a sequence of Nodes, because +# names can be hierarchical and contain constant expressions. +# +NamedInitializer: [name**, expr*] + +# a list of comma separated function parameter declarations +# +ParamList: [params**] + +PtrDecl: [quals, type*] + +Return: [expr*] + +StaticAssert: [cond*, message*] + +# name: struct tag name +# decls: declaration of members +# +Struct: [name, decls**] + +# type: . or -> +# name.field or name->field +# +StructRef: [name*, type, field*] + +Switch: [cond*, stmt*] + +# cond ? iftrue : iffalse +# +TernaryOp: [cond*, iftrue*, iffalse*] + +# A base type declaration +# +TypeDecl: [declname, quals, align, type*] + +# A typedef declaration. +# Very similar to Decl, but without some attributes +# +Typedef: [name, quals, storage, type*] + +Typename: [name, quals, align, type*] + +UnaryOp: [op, expr*] + +# name: union tag name +# decls: declaration of members +# +Union: [name, decls**] + +While: [cond*, stmt*] + +Pragma: [string] diff --git a/templates/skills/file_manager/dependencies/pycparser/ast_transforms.py b/templates/skills/file_manager/dependencies/pycparser/ast_transforms.py new file mode 100644 index 00000000..367dcf54 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ast_transforms.py @@ -0,0 +1,164 @@ +#------------------------------------------------------------------------------ +# pycparser: ast_transforms.py +# +# Some utilities used by the parser to create a friendlier AST. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#------------------------------------------------------------------------------ + +from . 
import c_ast + + +def fix_switch_cases(switch_node): + """ The 'case' statements in a 'switch' come out of parsing with one + child node, so subsequent statements are just tucked to the parent + Compound. Additionally, consecutive (fall-through) case statements + come out messy. This is a peculiarity of the C grammar. The following: + + switch (myvar) { + case 10: + k = 10; + p = k + 1; + return 10; + case 20: + case 30: + return 20; + default: + break; + } + + Creates this tree (pseudo-dump): + + Switch + ID: myvar + Compound: + Case 10: + k = 10 + p = k + 1 + return 10 + Case 20: + Case 30: + return 20 + Default: + break + + The goal of this transform is to fix this mess, turning it into the + following: + + Switch + ID: myvar + Compound: + Case 10: + k = 10 + p = k + 1 + return 10 + Case 20: + Case 30: + return 20 + Default: + break + + A fixed AST node is returned. The argument may be modified. + """ + assert isinstance(switch_node, c_ast.Switch) + if not isinstance(switch_node.stmt, c_ast.Compound): + return switch_node + + # The new Compound child for the Switch, which will collect children in the + # correct order + new_compound = c_ast.Compound([], switch_node.stmt.coord) + + # The last Case/Default node + last_case = None + + # Goes over the children of the Compound below the Switch, adding them + # either directly below new_compound or below the last Case as appropriate + # (for `switch(cond) {}`, block_items would have been None) + for child in (switch_node.stmt.block_items or []): + if isinstance(child, (c_ast.Case, c_ast.Default)): + # If it's a Case/Default: + # 1. Add it to the Compound and mark as "last case" + # 2. If its immediate child is also a Case or Default, promote it + # to a sibling. + new_compound.block_items.append(child) + _extract_nested_case(child, new_compound.block_items) + last_case = new_compound.block_items[-1] + else: + # Other statements are added as children to the last case, if it + # exists. + if last_case is None: + new_compound.block_items.append(child) + else: + last_case.stmts.append(child) + + switch_node.stmt = new_compound + return switch_node + + +def _extract_nested_case(case_node, stmts_list): + """ Recursively extract consecutive Case statements that are made nested + by the parser and add them to the stmts_list. + """ + if isinstance(case_node.stmts[0], (c_ast.Case, c_ast.Default)): + stmts_list.append(case_node.stmts.pop()) + _extract_nested_case(stmts_list[-1], stmts_list) + + +def fix_atomic_specifiers(decl): + """ Atomic specifiers like _Atomic(type) are unusually structured, + conferring a qualifier upon the contained type. + + This function fixes a decl with atomic specifiers to have a sane AST + structure, by removing spurious Typename->TypeDecl pairs and attaching + the _Atomic qualifier in the right place. + """ + # There can be multiple levels of _Atomic in a decl; fix them until a + # fixed point is reached. + while True: + decl, found = _fix_atomic_specifiers_once(decl) + if not found: + break + + # Make sure to add an _Atomic qual on the topmost decl if needed. Also + # restore the declname on the innermost TypeDecl (it gets placed in the + # wrong place during construction). 
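The switch transform above is applied by the parser itself, so its effect is easiest to see on a parsed tree. A quick check, assuming the usual `c_parser.CParser` front end; the attribute path into the AST is specific to this sample snippet:

```python
from pycparser import c_parser

src = '''
void f(int v) {
    switch (v) {
        case 1:
            v = 10;
            break;
        default:
            break;
    }
}
'''
ast = c_parser.CParser().parse(src)
switch = ast.ext[0].body.block_items[0]
case = switch.stmt.block_items[0]
# After fix_switch_cases, the assignment and the break hang off the
# Case node instead of being loose siblings in the Compound.
print(type(case).__name__, len(case.stmts))  # Case 2
```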
+ typ = decl + while not isinstance(typ, c_ast.TypeDecl): + try: + typ = typ.type + except AttributeError: + return decl + if '_Atomic' in typ.quals and '_Atomic' not in decl.quals: + decl.quals.append('_Atomic') + if typ.declname is None: + typ.declname = decl.name + + return decl + + +def _fix_atomic_specifiers_once(decl): + """ Performs one 'fix' round of atomic specifiers. + Returns (modified_decl, found) where found is True iff a fix was made. + """ + parent = decl + grandparent = None + node = decl.type + while node is not None: + if isinstance(node, c_ast.Typename) and '_Atomic' in node.quals: + break + try: + grandparent = parent + parent = node + node = node.type + except AttributeError: + # If we've reached a node without a `type` field, it means we won't + # find what we're looking for at this point; give up the search + # and return the original decl unmodified. + return decl, False + + assert isinstance(parent, c_ast.TypeDecl) + grandparent.type = node.type + if '_Atomic' not in node.type.quals: + node.type.quals.append('_Atomic') + return decl, True diff --git a/templates/skills/file_manager/dependencies/pycparser/c_ast.py b/templates/skills/file_manager/dependencies/pycparser/c_ast.py new file mode 100644 index 00000000..6575a2ad --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/c_ast.py @@ -0,0 +1,1125 @@ +#----------------------------------------------------------------- +# ** ATTENTION ** +# This code was automatically generated from the file: +# _c_ast.cfg +# +# Do not modify it directly. Modify the configuration file and +# run the generator again. +# ** ** *** ** ** +# +# pycparser: c_ast.py +# +# AST Node classes. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#----------------------------------------------------------------- + + +import sys + +def _repr(obj): + """ + Get the representation of an object, with dedicated pprint-like format for lists. + """ + if isinstance(obj, list): + return '[' + (',\n '.join((_repr(e).replace('\n', '\n ') for e in obj))) + '\n]' + else: + return repr(obj) + +class Node(object): + __slots__ = () + """ Abstract base class for AST nodes. + """ + def __repr__(self): + """ Generates a python representation of the current node + """ + result = self.__class__.__name__ + '(' + + indent = '' + separator = '' + for name in self.__slots__[:-2]: + result += separator + result += indent + result += name + '=' + (_repr(getattr(self, name)).replace('\n', '\n ' + (' ' * (len(name) + len(self.__class__.__name__))))) + + separator = ',' + indent = '\n ' + (' ' * len(self.__class__.__name__)) + + result += indent + ')' + + return result + + def children(self): + """ A sequence of all children that are Nodes + """ + pass + + def show(self, buf=sys.stdout, offset=0, attrnames=False, nodenames=False, showcoord=False, _my_node_name=None): + """ Pretty print the Node and all its attributes and + children (recursively) to a buffer. + + buf: + Open IO buffer into which the Node is printed. + + offset: + Initial offset (amount of leading spaces) + + attrnames: + True if you want to see the attribute names in + name=value pairs. False to only see the values. + + nodenames: + True if you want to see the actual node names + within their parents. + + showcoord: + Do you want the coordinates of each Node to be + displayed. 
+ """ + lead = ' ' * offset + if nodenames and _my_node_name is not None: + buf.write(lead + self.__class__.__name__+ ' <' + _my_node_name + '>: ') + else: + buf.write(lead + self.__class__.__name__+ ': ') + + if self.attr_names: + if attrnames: + nvlist = [(n, getattr(self,n)) for n in self.attr_names] + attrstr = ', '.join('%s=%s' % nv for nv in nvlist) + else: + vlist = [getattr(self, n) for n in self.attr_names] + attrstr = ', '.join('%s' % v for v in vlist) + buf.write(attrstr) + + if showcoord: + buf.write(' (at %s)' % self.coord) + buf.write('\n') + + for (child_name, child) in self.children(): + child.show( + buf, + offset=offset + 2, + attrnames=attrnames, + nodenames=nodenames, + showcoord=showcoord, + _my_node_name=child_name) + + +class NodeVisitor(object): + """ A base NodeVisitor class for visiting c_ast nodes. + Subclass it and define your own visit_XXX methods, where + XXX is the class name you want to visit with these + methods. + + For example: + + class ConstantVisitor(NodeVisitor): + def __init__(self): + self.values = [] + + def visit_Constant(self, node): + self.values.append(node.value) + + Creates a list of values of all the constant nodes + encountered below the given node. To use it: + + cv = ConstantVisitor() + cv.visit(node) + + Notes: + + * generic_visit() will be called for AST nodes for which + no visit_XXX method was defined. + * The children of nodes for which a visit_XXX was + defined will not be visited - if you need this, call + generic_visit() on the node. + You can use: + NodeVisitor.generic_visit(self, node) + * Modeled after Python's own AST visiting facilities + (the ast module of Python 3.0) + """ + + _method_cache = None + + def visit(self, node): + """ Visit a node. + """ + + if self._method_cache is None: + self._method_cache = {} + + visitor = self._method_cache.get(node.__class__.__name__, None) + if visitor is None: + method = 'visit_' + node.__class__.__name__ + visitor = getattr(self, method, self.generic_visit) + self._method_cache[node.__class__.__name__] = visitor + + return visitor(node) + + def generic_visit(self, node): + """ Called if no explicit visitor function exists for a + node. Implements preorder visiting of the node. 
+ """ + for c in node: + self.visit(c) + +class ArrayDecl(Node): + __slots__ = ('type', 'dim', 'dim_quals', 'coord', '__weakref__') + def __init__(self, type, dim, dim_quals, coord=None): + self.type = type + self.dim = dim + self.dim_quals = dim_quals + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + if self.dim is not None: nodelist.append(("dim", self.dim)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + if self.dim is not None: + yield self.dim + + attr_names = ('dim_quals', ) + +class ArrayRef(Node): + __slots__ = ('name', 'subscript', 'coord', '__weakref__') + def __init__(self, name, subscript, coord=None): + self.name = name + self.subscript = subscript + self.coord = coord + + def children(self): + nodelist = [] + if self.name is not None: nodelist.append(("name", self.name)) + if self.subscript is not None: nodelist.append(("subscript", self.subscript)) + return tuple(nodelist) + + def __iter__(self): + if self.name is not None: + yield self.name + if self.subscript is not None: + yield self.subscript + + attr_names = () + +class Assignment(Node): + __slots__ = ('op', 'lvalue', 'rvalue', 'coord', '__weakref__') + def __init__(self, op, lvalue, rvalue, coord=None): + self.op = op + self.lvalue = lvalue + self.rvalue = rvalue + self.coord = coord + + def children(self): + nodelist = [] + if self.lvalue is not None: nodelist.append(("lvalue", self.lvalue)) + if self.rvalue is not None: nodelist.append(("rvalue", self.rvalue)) + return tuple(nodelist) + + def __iter__(self): + if self.lvalue is not None: + yield self.lvalue + if self.rvalue is not None: + yield self.rvalue + + attr_names = ('op', ) + +class Alignas(Node): + __slots__ = ('alignment', 'coord', '__weakref__') + def __init__(self, alignment, coord=None): + self.alignment = alignment + self.coord = coord + + def children(self): + nodelist = [] + if self.alignment is not None: nodelist.append(("alignment", self.alignment)) + return tuple(nodelist) + + def __iter__(self): + if self.alignment is not None: + yield self.alignment + + attr_names = () + +class BinaryOp(Node): + __slots__ = ('op', 'left', 'right', 'coord', '__weakref__') + def __init__(self, op, left, right, coord=None): + self.op = op + self.left = left + self.right = right + self.coord = coord + + def children(self): + nodelist = [] + if self.left is not None: nodelist.append(("left", self.left)) + if self.right is not None: nodelist.append(("right", self.right)) + return tuple(nodelist) + + def __iter__(self): + if self.left is not None: + yield self.left + if self.right is not None: + yield self.right + + attr_names = ('op', ) + +class Break(Node): + __slots__ = ('coord', '__weakref__') + def __init__(self, coord=None): + self.coord = coord + + def children(self): + return () + + def __iter__(self): + return + yield + + attr_names = () + +class Case(Node): + __slots__ = ('expr', 'stmts', 'coord', '__weakref__') + def __init__(self, expr, stmts, coord=None): + self.expr = expr + self.stmts = stmts + self.coord = coord + + def children(self): + nodelist = [] + if self.expr is not None: nodelist.append(("expr", self.expr)) + for i, child in enumerate(self.stmts or []): + nodelist.append(("stmts[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + if self.expr is not None: + yield self.expr + for child in (self.stmts or []): + yield child + + attr_names = () + +class Cast(Node): + __slots__ = ('to_type', 'expr', 'coord', 
'__weakref__') + def __init__(self, to_type, expr, coord=None): + self.to_type = to_type + self.expr = expr + self.coord = coord + + def children(self): + nodelist = [] + if self.to_type is not None: nodelist.append(("to_type", self.to_type)) + if self.expr is not None: nodelist.append(("expr", self.expr)) + return tuple(nodelist) + + def __iter__(self): + if self.to_type is not None: + yield self.to_type + if self.expr is not None: + yield self.expr + + attr_names = () + +class Compound(Node): + __slots__ = ('block_items', 'coord', '__weakref__') + def __init__(self, block_items, coord=None): + self.block_items = block_items + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.block_items or []): + nodelist.append(("block_items[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.block_items or []): + yield child + + attr_names = () + +class CompoundLiteral(Node): + __slots__ = ('type', 'init', 'coord', '__weakref__') + def __init__(self, type, init, coord=None): + self.type = type + self.init = init + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + if self.init is not None: nodelist.append(("init", self.init)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + if self.init is not None: + yield self.init + + attr_names = () + +class Constant(Node): + __slots__ = ('type', 'value', 'coord', '__weakref__') + def __init__(self, type, value, coord=None): + self.type = type + self.value = value + self.coord = coord + + def children(self): + nodelist = [] + return tuple(nodelist) + + def __iter__(self): + return + yield + + attr_names = ('type', 'value', ) + +class Continue(Node): + __slots__ = ('coord', '__weakref__') + def __init__(self, coord=None): + self.coord = coord + + def children(self): + return () + + def __iter__(self): + return + yield + + attr_names = () + +class Decl(Node): + __slots__ = ('name', 'quals', 'align', 'storage', 'funcspec', 'type', 'init', 'bitsize', 'coord', '__weakref__') + def __init__(self, name, quals, align, storage, funcspec, type, init, bitsize, coord=None): + self.name = name + self.quals = quals + self.align = align + self.storage = storage + self.funcspec = funcspec + self.type = type + self.init = init + self.bitsize = bitsize + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + if self.init is not None: nodelist.append(("init", self.init)) + if self.bitsize is not None: nodelist.append(("bitsize", self.bitsize)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + if self.init is not None: + yield self.init + if self.bitsize is not None: + yield self.bitsize + + attr_names = ('name', 'quals', 'align', 'storage', 'funcspec', ) + +class DeclList(Node): + __slots__ = ('decls', 'coord', '__weakref__') + def __init__(self, decls, coord=None): + self.decls = decls + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.decls or []): + nodelist.append(("decls[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.decls or []): + yield child + + attr_names = () + +class Default(Node): + __slots__ = ('stmts', 'coord', '__weakref__') + def __init__(self, stmts, coord=None): + self.stmts = stmts + self.coord = coord + + def children(self): + nodelist = [] + for i, child in 
enumerate(self.stmts or []): + nodelist.append(("stmts[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.stmts or []): + yield child + + attr_names = () + +class DoWhile(Node): + __slots__ = ('cond', 'stmt', 'coord', '__weakref__') + def __init__(self, cond, stmt, coord=None): + self.cond = cond + self.stmt = stmt + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.stmt is not None: nodelist.append(("stmt", self.stmt)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.stmt is not None: + yield self.stmt + + attr_names = () + +class EllipsisParam(Node): + __slots__ = ('coord', '__weakref__') + def __init__(self, coord=None): + self.coord = coord + + def children(self): + return () + + def __iter__(self): + return + yield + + attr_names = () + +class EmptyStatement(Node): + __slots__ = ('coord', '__weakref__') + def __init__(self, coord=None): + self.coord = coord + + def children(self): + return () + + def __iter__(self): + return + yield + + attr_names = () + +class Enum(Node): + __slots__ = ('name', 'values', 'coord', '__weakref__') + def __init__(self, name, values, coord=None): + self.name = name + self.values = values + self.coord = coord + + def children(self): + nodelist = [] + if self.values is not None: nodelist.append(("values", self.values)) + return tuple(nodelist) + + def __iter__(self): + if self.values is not None: + yield self.values + + attr_names = ('name', ) + +class Enumerator(Node): + __slots__ = ('name', 'value', 'coord', '__weakref__') + def __init__(self, name, value, coord=None): + self.name = name + self.value = value + self.coord = coord + + def children(self): + nodelist = [] + if self.value is not None: nodelist.append(("value", self.value)) + return tuple(nodelist) + + def __iter__(self): + if self.value is not None: + yield self.value + + attr_names = ('name', ) + +class EnumeratorList(Node): + __slots__ = ('enumerators', 'coord', '__weakref__') + def __init__(self, enumerators, coord=None): + self.enumerators = enumerators + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.enumerators or []): + nodelist.append(("enumerators[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.enumerators or []): + yield child + + attr_names = () + +class ExprList(Node): + __slots__ = ('exprs', 'coord', '__weakref__') + def __init__(self, exprs, coord=None): + self.exprs = exprs + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.exprs or []): + nodelist.append(("exprs[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.exprs or []): + yield child + + attr_names = () + +class FileAST(Node): + __slots__ = ('ext', 'coord', '__weakref__') + def __init__(self, ext, coord=None): + self.ext = ext + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.ext or []): + nodelist.append(("ext[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.ext or []): + yield child + + attr_names = () + +class For(Node): + __slots__ = ('init', 'cond', 'next', 'stmt', 'coord', '__weakref__') + def __init__(self, init, cond, next, stmt, coord=None): + self.init = init + self.cond = cond + self.next = next + self.stmt = stmt + self.coord = coord + + def children(self): + nodelist = [] + if self.init is 
not None: nodelist.append(("init", self.init)) + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.next is not None: nodelist.append(("next", self.next)) + if self.stmt is not None: nodelist.append(("stmt", self.stmt)) + return tuple(nodelist) + + def __iter__(self): + if self.init is not None: + yield self.init + if self.cond is not None: + yield self.cond + if self.next is not None: + yield self.next + if self.stmt is not None: + yield self.stmt + + attr_names = () + +class FuncCall(Node): + __slots__ = ('name', 'args', 'coord', '__weakref__') + def __init__(self, name, args, coord=None): + self.name = name + self.args = args + self.coord = coord + + def children(self): + nodelist = [] + if self.name is not None: nodelist.append(("name", self.name)) + if self.args is not None: nodelist.append(("args", self.args)) + return tuple(nodelist) + + def __iter__(self): + if self.name is not None: + yield self.name + if self.args is not None: + yield self.args + + attr_names = () + +class FuncDecl(Node): + __slots__ = ('args', 'type', 'coord', '__weakref__') + def __init__(self, args, type, coord=None): + self.args = args + self.type = type + self.coord = coord + + def children(self): + nodelist = [] + if self.args is not None: nodelist.append(("args", self.args)) + if self.type is not None: nodelist.append(("type", self.type)) + return tuple(nodelist) + + def __iter__(self): + if self.args is not None: + yield self.args + if self.type is not None: + yield self.type + + attr_names = () + +class FuncDef(Node): + __slots__ = ('decl', 'param_decls', 'body', 'coord', '__weakref__') + def __init__(self, decl, param_decls, body, coord=None): + self.decl = decl + self.param_decls = param_decls + self.body = body + self.coord = coord + + def children(self): + nodelist = [] + if self.decl is not None: nodelist.append(("decl", self.decl)) + if self.body is not None: nodelist.append(("body", self.body)) + for i, child in enumerate(self.param_decls or []): + nodelist.append(("param_decls[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + if self.decl is not None: + yield self.decl + if self.body is not None: + yield self.body + for child in (self.param_decls or []): + yield child + + attr_names = () + +class Goto(Node): + __slots__ = ('name', 'coord', '__weakref__') + def __init__(self, name, coord=None): + self.name = name + self.coord = coord + + def children(self): + nodelist = [] + return tuple(nodelist) + + def __iter__(self): + return + yield + + attr_names = ('name', ) + +class ID(Node): + __slots__ = ('name', 'coord', '__weakref__') + def __init__(self, name, coord=None): + self.name = name + self.coord = coord + + def children(self): + nodelist = [] + return tuple(nodelist) + + def __iter__(self): + return + yield + + attr_names = ('name', ) + +class IdentifierType(Node): + __slots__ = ('names', 'coord', '__weakref__') + def __init__(self, names, coord=None): + self.names = names + self.coord = coord + + def children(self): + nodelist = [] + return tuple(nodelist) + + def __iter__(self): + return + yield + + attr_names = ('names', ) + +class If(Node): + __slots__ = ('cond', 'iftrue', 'iffalse', 'coord', '__weakref__') + def __init__(self, cond, iftrue, iffalse, coord=None): + self.cond = cond + self.iftrue = iftrue + self.iffalse = iffalse + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.iftrue is not None: nodelist.append(("iftrue", self.iftrue)) + if self.iffalse is not 
None: nodelist.append(("iffalse", self.iffalse)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.iftrue is not None: + yield self.iftrue + if self.iffalse is not None: + yield self.iffalse + + attr_names = () + +class InitList(Node): + __slots__ = ('exprs', 'coord', '__weakref__') + def __init__(self, exprs, coord=None): + self.exprs = exprs + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.exprs or []): + nodelist.append(("exprs[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.exprs or []): + yield child + + attr_names = () + +class Label(Node): + __slots__ = ('name', 'stmt', 'coord', '__weakref__') + def __init__(self, name, stmt, coord=None): + self.name = name + self.stmt = stmt + self.coord = coord + + def children(self): + nodelist = [] + if self.stmt is not None: nodelist.append(("stmt", self.stmt)) + return tuple(nodelist) + + def __iter__(self): + if self.stmt is not None: + yield self.stmt + + attr_names = ('name', ) + +class NamedInitializer(Node): + __slots__ = ('name', 'expr', 'coord', '__weakref__') + def __init__(self, name, expr, coord=None): + self.name = name + self.expr = expr + self.coord = coord + + def children(self): + nodelist = [] + if self.expr is not None: nodelist.append(("expr", self.expr)) + for i, child in enumerate(self.name or []): + nodelist.append(("name[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + if self.expr is not None: + yield self.expr + for child in (self.name or []): + yield child + + attr_names = () + +class ParamList(Node): + __slots__ = ('params', 'coord', '__weakref__') + def __init__(self, params, coord=None): + self.params = params + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.params or []): + nodelist.append(("params[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.params or []): + yield child + + attr_names = () + +class PtrDecl(Node): + __slots__ = ('quals', 'type', 'coord', '__weakref__') + def __init__(self, quals, type, coord=None): + self.quals = quals + self.type = type + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + + attr_names = ('quals', ) + +class Return(Node): + __slots__ = ('expr', 'coord', '__weakref__') + def __init__(self, expr, coord=None): + self.expr = expr + self.coord = coord + + def children(self): + nodelist = [] + if self.expr is not None: nodelist.append(("expr", self.expr)) + return tuple(nodelist) + + def __iter__(self): + if self.expr is not None: + yield self.expr + + attr_names = () + +class StaticAssert(Node): + __slots__ = ('cond', 'message', 'coord', '__weakref__') + def __init__(self, cond, message, coord=None): + self.cond = cond + self.message = message + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.message is not None: nodelist.append(("message", self.message)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.message is not None: + yield self.message + + attr_names = () + +class Struct(Node): + __slots__ = ('name', 'decls', 'coord', '__weakref__') + def __init__(self, name, decls, coord=None): + self.name = name + self.decls = decls + 
self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.decls or []): + nodelist.append(("decls[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.decls or []): + yield child + + attr_names = ('name', ) + +class StructRef(Node): + __slots__ = ('name', 'type', 'field', 'coord', '__weakref__') + def __init__(self, name, type, field, coord=None): + self.name = name + self.type = type + self.field = field + self.coord = coord + + def children(self): + nodelist = [] + if self.name is not None: nodelist.append(("name", self.name)) + if self.field is not None: nodelist.append(("field", self.field)) + return tuple(nodelist) + + def __iter__(self): + if self.name is not None: + yield self.name + if self.field is not None: + yield self.field + + attr_names = ('type', ) + +class Switch(Node): + __slots__ = ('cond', 'stmt', 'coord', '__weakref__') + def __init__(self, cond, stmt, coord=None): + self.cond = cond + self.stmt = stmt + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.stmt is not None: nodelist.append(("stmt", self.stmt)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.stmt is not None: + yield self.stmt + + attr_names = () + +class TernaryOp(Node): + __slots__ = ('cond', 'iftrue', 'iffalse', 'coord', '__weakref__') + def __init__(self, cond, iftrue, iffalse, coord=None): + self.cond = cond + self.iftrue = iftrue + self.iffalse = iffalse + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.iftrue is not None: nodelist.append(("iftrue", self.iftrue)) + if self.iffalse is not None: nodelist.append(("iffalse", self.iffalse)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.iftrue is not None: + yield self.iftrue + if self.iffalse is not None: + yield self.iffalse + + attr_names = () + +class TypeDecl(Node): + __slots__ = ('declname', 'quals', 'align', 'type', 'coord', '__weakref__') + def __init__(self, declname, quals, align, type, coord=None): + self.declname = declname + self.quals = quals + self.align = align + self.type = type + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + + attr_names = ('declname', 'quals', 'align', ) + +class Typedef(Node): + __slots__ = ('name', 'quals', 'storage', 'type', 'coord', '__weakref__') + def __init__(self, name, quals, storage, type, coord=None): + self.name = name + self.quals = quals + self.storage = storage + self.type = type + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield self.type + + attr_names = ('name', 'quals', 'storage', ) + +class Typename(Node): + __slots__ = ('name', 'quals', 'align', 'type', 'coord', '__weakref__') + def __init__(self, name, quals, align, type, coord=None): + self.name = name + self.quals = quals + self.align = align + self.type = type + self.coord = coord + + def children(self): + nodelist = [] + if self.type is not None: nodelist.append(("type", self.type)) + return tuple(nodelist) + + def __iter__(self): + if self.type is not None: + yield 
self.type + + attr_names = ('name', 'quals', 'align', ) + +class UnaryOp(Node): + __slots__ = ('op', 'expr', 'coord', '__weakref__') + def __init__(self, op, expr, coord=None): + self.op = op + self.expr = expr + self.coord = coord + + def children(self): + nodelist = [] + if self.expr is not None: nodelist.append(("expr", self.expr)) + return tuple(nodelist) + + def __iter__(self): + if self.expr is not None: + yield self.expr + + attr_names = ('op', ) + +class Union(Node): + __slots__ = ('name', 'decls', 'coord', '__weakref__') + def __init__(self, name, decls, coord=None): + self.name = name + self.decls = decls + self.coord = coord + + def children(self): + nodelist = [] + for i, child in enumerate(self.decls or []): + nodelist.append(("decls[%d]" % i, child)) + return tuple(nodelist) + + def __iter__(self): + for child in (self.decls or []): + yield child + + attr_names = ('name', ) + +class While(Node): + __slots__ = ('cond', 'stmt', 'coord', '__weakref__') + def __init__(self, cond, stmt, coord=None): + self.cond = cond + self.stmt = stmt + self.coord = coord + + def children(self): + nodelist = [] + if self.cond is not None: nodelist.append(("cond", self.cond)) + if self.stmt is not None: nodelist.append(("stmt", self.stmt)) + return tuple(nodelist) + + def __iter__(self): + if self.cond is not None: + yield self.cond + if self.stmt is not None: + yield self.stmt + + attr_names = () + +class Pragma(Node): + __slots__ = ('string', 'coord', '__weakref__') + def __init__(self, string, coord=None): + self.string = string + self.coord = coord + + def children(self): + nodelist = [] + return tuple(nodelist) + + def __iter__(self): + return + yield + + attr_names = ('string', ) + diff --git a/templates/skills/file_manager/dependencies/pycparser/c_generator.py b/templates/skills/file_manager/dependencies/pycparser/c_generator.py new file mode 100644 index 00000000..1057b2c6 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/c_generator.py @@ -0,0 +1,502 @@ +#------------------------------------------------------------------------------ +# pycparser: c_generator.py +# +# C code generator from pycparser AST nodes. +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#------------------------------------------------------------------------------ +from . import c_ast + + +class CGenerator(object): + """ Uses the same visitor pattern as c_ast.NodeVisitor, but modified to + return a value from each visit method, using string accumulation in + generic_visit. + """ + def __init__(self, reduce_parentheses=False): + """ Constructs C-code generator + + reduce_parentheses: + if True, eliminates needless parentheses on binary operators + """ + # Statements start with indentation of self.indent_level spaces, using + # the _make_indent method. 
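A typical round trip through `CGenerator` parses C source and regenerates it from the AST. A minimal sketch, assuming `c_parser.CParser` is available alongside this module; the sample function is illustrative:

```python
from pycparser import c_parser, c_generator

src = 'int add(int a, int b) { return a + b; }'
ast = c_parser.CParser().parse(src)
print(c_generator.CGenerator().visit(ast))
# int add(int a, int b)
# {
#   return a + b;
# }
```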
+ self.indent_level = 0 + self.reduce_parentheses = reduce_parentheses + + def _make_indent(self): + return ' ' * self.indent_level + + def visit(self, node): + method = 'visit_' + node.__class__.__name__ + return getattr(self, method, self.generic_visit)(node) + + def generic_visit(self, node): + if node is None: + return '' + else: + return ''.join(self.visit(c) for c_name, c in node.children()) + + def visit_Constant(self, n): + return n.value + + def visit_ID(self, n): + return n.name + + def visit_Pragma(self, n): + ret = '#pragma' + if n.string: + ret += ' ' + n.string + return ret + + def visit_ArrayRef(self, n): + arrref = self._parenthesize_unless_simple(n.name) + return arrref + '[' + self.visit(n.subscript) + ']' + + def visit_StructRef(self, n): + sref = self._parenthesize_unless_simple(n.name) + return sref + n.type + self.visit(n.field) + + def visit_FuncCall(self, n): + fref = self._parenthesize_unless_simple(n.name) + return fref + '(' + self.visit(n.args) + ')' + + def visit_UnaryOp(self, n): + if n.op == 'sizeof': + # Always parenthesize the argument of sizeof since it can be + # a name. + return 'sizeof(%s)' % self.visit(n.expr) + else: + operand = self._parenthesize_unless_simple(n.expr) + if n.op == 'p++': + return '%s++' % operand + elif n.op == 'p--': + return '%s--' % operand + else: + return '%s%s' % (n.op, operand) + + # Precedence map of binary operators: + precedence_map = { + # Should be in sync with c_parser.CParser.precedence + # Higher numbers are stronger binding + '||': 0, # weakest binding + '&&': 1, + '|': 2, + '^': 3, + '&': 4, + '==': 5, '!=': 5, + '>': 6, '>=': 6, '<': 6, '<=': 6, + '>>': 7, '<<': 7, + '+': 8, '-': 8, + '*': 9, '/': 9, '%': 9 # strongest binding + } + + def visit_BinaryOp(self, n): + # Note: all binary operators are left-to-right associative + # + # If `n.left.op` has a stronger or equally binding precedence in + # comparison to `n.op`, no parenthesis are needed for the left: + # e.g., `(a*b) + c` is equivalent to `a*b + c`, as well as + # `(a+b) - c` is equivalent to `a+b - c` (same precedence). + # If the left operator is weaker binding than the current, then + # parentheses are necessary: + # e.g., `(a+b) * c` is NOT equivalent to `a+b * c`. + lval_str = self._parenthesize_if( + n.left, + lambda d: not (self._is_simple_node(d) or + self.reduce_parentheses and isinstance(d, c_ast.BinaryOp) and + self.precedence_map[d.op] >= self.precedence_map[n.op])) + # If `n.right.op` has a stronger -but not equal- binding precedence, + # parenthesis can be omitted on the right: + # e.g., `a + (b*c)` is equivalent to `a + b*c`. + # If the right operator is weaker or equally binding, then parentheses + # are necessary: + # e.g., `a * (b+c)` is NOT equivalent to `a * b+c` and + # `a - (b+c)` is NOT equivalent to `a - b+c` (same precedence). 
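Since parentheses are not stored in the AST, the generator re-inserts them around every non-simple operand unless `reduce_parentheses=True` lets the precedence map above prove they are redundant. A small illustration (a sketch, assuming the standard parser front end):

```python
from pycparser import c_parser, c_generator

ast = c_parser.CParser().parse('int x = 1 + 2 * 3 + 4;')

# Default: every BinaryOp operand that is itself a BinaryOp gets parens.
print(c_generator.CGenerator().visit(ast))
# int x = (1 + (2 * 3)) + 4;

# With reduce_parentheses, the precedence map drops the redundant ones.
print(c_generator.CGenerator(reduce_parentheses=True).visit(ast))
# int x = 1 + 2 * 3 + 4;
```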
+ rval_str = self._parenthesize_if( + n.right, + lambda d: not (self._is_simple_node(d) or + self.reduce_parentheses and isinstance(d, c_ast.BinaryOp) and + self.precedence_map[d.op] > self.precedence_map[n.op])) + return '%s %s %s' % (lval_str, n.op, rval_str) + + def visit_Assignment(self, n): + rval_str = self._parenthesize_if( + n.rvalue, + lambda n: isinstance(n, c_ast.Assignment)) + return '%s %s %s' % (self.visit(n.lvalue), n.op, rval_str) + + def visit_IdentifierType(self, n): + return ' '.join(n.names) + + def _visit_expr(self, n): + if isinstance(n, c_ast.InitList): + return '{' + self.visit(n) + '}' + elif isinstance(n, c_ast.ExprList): + return '(' + self.visit(n) + ')' + else: + return self.visit(n) + + def visit_Decl(self, n, no_type=False): + # no_type is used when a Decl is part of a DeclList, where the type is + # explicitly only for the first declaration in a list. + # + s = n.name if no_type else self._generate_decl(n) + if n.bitsize: s += ' : ' + self.visit(n.bitsize) + if n.init: + s += ' = ' + self._visit_expr(n.init) + return s + + def visit_DeclList(self, n): + s = self.visit(n.decls[0]) + if len(n.decls) > 1: + s += ', ' + ', '.join(self.visit_Decl(decl, no_type=True) + for decl in n.decls[1:]) + return s + + def visit_Typedef(self, n): + s = '' + if n.storage: s += ' '.join(n.storage) + ' ' + s += self._generate_type(n.type) + return s + + def visit_Cast(self, n): + s = '(' + self._generate_type(n.to_type, emit_declname=False) + ')' + return s + ' ' + self._parenthesize_unless_simple(n.expr) + + def visit_ExprList(self, n): + visited_subexprs = [] + for expr in n.exprs: + visited_subexprs.append(self._visit_expr(expr)) + return ', '.join(visited_subexprs) + + def visit_InitList(self, n): + visited_subexprs = [] + for expr in n.exprs: + visited_subexprs.append(self._visit_expr(expr)) + return ', '.join(visited_subexprs) + + def visit_Enum(self, n): + return self._generate_struct_union_enum(n, name='enum') + + def visit_Alignas(self, n): + return '_Alignas({})'.format(self.visit(n.alignment)) + + def visit_Enumerator(self, n): + if not n.value: + return '{indent}{name},\n'.format( + indent=self._make_indent(), + name=n.name, + ) + else: + return '{indent}{name} = {value},\n'.format( + indent=self._make_indent(), + name=n.name, + value=self.visit(n.value), + ) + + def visit_FuncDef(self, n): + decl = self.visit(n.decl) + self.indent_level = 0 + body = self.visit(n.body) + if n.param_decls: + knrdecls = ';\n'.join(self.visit(p) for p in n.param_decls) + return decl + '\n' + knrdecls + ';\n' + body + '\n' + else: + return decl + '\n' + body + '\n' + + def visit_FileAST(self, n): + s = '' + for ext in n.ext: + if isinstance(ext, c_ast.FuncDef): + s += self.visit(ext) + elif isinstance(ext, c_ast.Pragma): + s += self.visit(ext) + '\n' + else: + s += self.visit(ext) + ';\n' + return s + + def visit_Compound(self, n): + s = self._make_indent() + '{\n' + self.indent_level += 2 + if n.block_items: + s += ''.join(self._generate_stmt(stmt) for stmt in n.block_items) + self.indent_level -= 2 + s += self._make_indent() + '}\n' + return s + + def visit_CompoundLiteral(self, n): + return '(' + self.visit(n.type) + '){' + self.visit(n.init) + '}' + + + def visit_EmptyStatement(self, n): + return ';' + + def visit_ParamList(self, n): + return ', '.join(self.visit(param) for param in n.params) + + def visit_Return(self, n): + s = 'return' + if n.expr: s += ' ' + self.visit(n.expr) + return s + ';' + + def visit_Break(self, n): + return 'break;' + + def visit_Continue(self, n): + 
return 'continue;' + + def visit_TernaryOp(self, n): + s = '(' + self._visit_expr(n.cond) + ') ? ' + s += '(' + self._visit_expr(n.iftrue) + ') : ' + s += '(' + self._visit_expr(n.iffalse) + ')' + return s + + def visit_If(self, n): + s = 'if (' + if n.cond: s += self.visit(n.cond) + s += ')\n' + s += self._generate_stmt(n.iftrue, add_indent=True) + if n.iffalse: + s += self._make_indent() + 'else\n' + s += self._generate_stmt(n.iffalse, add_indent=True) + return s + + def visit_For(self, n): + s = 'for (' + if n.init: s += self.visit(n.init) + s += ';' + if n.cond: s += ' ' + self.visit(n.cond) + s += ';' + if n.next: s += ' ' + self.visit(n.next) + s += ')\n' + s += self._generate_stmt(n.stmt, add_indent=True) + return s + + def visit_While(self, n): + s = 'while (' + if n.cond: s += self.visit(n.cond) + s += ')\n' + s += self._generate_stmt(n.stmt, add_indent=True) + return s + + def visit_DoWhile(self, n): + s = 'do\n' + s += self._generate_stmt(n.stmt, add_indent=True) + s += self._make_indent() + 'while (' + if n.cond: s += self.visit(n.cond) + s += ');' + return s + + def visit_StaticAssert(self, n): + s = '_Static_assert(' + s += self.visit(n.cond) + if n.message: + s += ',' + s += self.visit(n.message) + s += ')' + return s + + def visit_Switch(self, n): + s = 'switch (' + self.visit(n.cond) + ')\n' + s += self._generate_stmt(n.stmt, add_indent=True) + return s + + def visit_Case(self, n): + s = 'case ' + self.visit(n.expr) + ':\n' + for stmt in n.stmts: + s += self._generate_stmt(stmt, add_indent=True) + return s + + def visit_Default(self, n): + s = 'default:\n' + for stmt in n.stmts: + s += self._generate_stmt(stmt, add_indent=True) + return s + + def visit_Label(self, n): + return n.name + ':\n' + self._generate_stmt(n.stmt) + + def visit_Goto(self, n): + return 'goto ' + n.name + ';' + + def visit_EllipsisParam(self, n): + return '...' + + def visit_Struct(self, n): + return self._generate_struct_union_enum(n, 'struct') + + def visit_Typename(self, n): + return self._generate_type(n.type) + + def visit_Union(self, n): + return self._generate_struct_union_enum(n, 'union') + + def visit_NamedInitializer(self, n): + s = '' + for name in n.name: + if isinstance(name, c_ast.ID): + s += '.' + name.name + else: + s += '[' + self.visit(name) + ']' + s += ' = ' + self._visit_expr(n.expr) + return s + + def visit_FuncDecl(self, n): + return self._generate_type(n) + + def visit_ArrayDecl(self, n): + return self._generate_type(n, emit_declname=False) + + def visit_TypeDecl(self, n): + return self._generate_type(n, emit_declname=False) + + def visit_PtrDecl(self, n): + return self._generate_type(n, emit_declname=False) + + def _generate_struct_union_enum(self, n, name): + """ Generates code for structs, unions, and enums. name should be + 'struct', 'union', or 'enum'. 
+ """ + if name in ('struct', 'union'): + members = n.decls + body_function = self._generate_struct_union_body + else: + assert name == 'enum' + members = None if n.values is None else n.values.enumerators + body_function = self._generate_enum_body + s = name + ' ' + (n.name or '') + if members is not None: + # None means no members + # Empty sequence means an empty list of members + s += '\n' + s += self._make_indent() + self.indent_level += 2 + s += '{\n' + s += body_function(members) + self.indent_level -= 2 + s += self._make_indent() + '}' + return s + + def _generate_struct_union_body(self, members): + return ''.join(self._generate_stmt(decl) for decl in members) + + def _generate_enum_body(self, members): + # `[:-2] + '\n'` removes the final `,` from the enumerator list + return ''.join(self.visit(value) for value in members)[:-2] + '\n' + + def _generate_stmt(self, n, add_indent=False): + """ Generation from a statement node. This method exists as a wrapper + for individual visit_* methods to handle different treatment of + some statements in this context. + """ + typ = type(n) + if add_indent: self.indent_level += 2 + indent = self._make_indent() + if add_indent: self.indent_level -= 2 + + if typ in ( + c_ast.Decl, c_ast.Assignment, c_ast.Cast, c_ast.UnaryOp, + c_ast.BinaryOp, c_ast.TernaryOp, c_ast.FuncCall, c_ast.ArrayRef, + c_ast.StructRef, c_ast.Constant, c_ast.ID, c_ast.Typedef, + c_ast.ExprList): + # These can also appear in an expression context so no semicolon + # is added to them automatically + # + return indent + self.visit(n) + ';\n' + elif typ in (c_ast.Compound,): + # No extra indentation required before the opening brace of a + # compound - because it consists of multiple lines it has to + # compute its own indentation. + # + return self.visit(n) + elif typ in (c_ast.If,): + return indent + self.visit(n) + else: + return indent + self.visit(n) + '\n' + + def _generate_decl(self, n): + """ Generation from a Decl node. + """ + s = '' + if n.funcspec: s = ' '.join(n.funcspec) + ' ' + if n.storage: s += ' '.join(n.storage) + ' ' + if n.align: s += self.visit(n.align[0]) + ' ' + s += self._generate_type(n.type) + return s + + def _generate_type(self, n, modifiers=[], emit_declname = True): + """ Recursive generation from a type node. n is the type node. + modifiers collects the PtrDecl, ArrayDecl and FuncDecl modifiers + encountered on the way down to a TypeDecl, to allow proper + generation from it. + """ + typ = type(n) + #~ print(n, modifiers) + + if typ == c_ast.TypeDecl: + s = '' + if n.quals: s += ' '.join(n.quals) + ' ' + s += self.visit(n.type) + + nstr = n.declname if n.declname and emit_declname else '' + # Resolve modifiers. + # Wrap in parens to distinguish pointer to array and pointer to + # function syntax. 
+ # + for i, modifier in enumerate(modifiers): + if isinstance(modifier, c_ast.ArrayDecl): + if (i != 0 and + isinstance(modifiers[i - 1], c_ast.PtrDecl)): + nstr = '(' + nstr + ')' + nstr += '[' + if modifier.dim_quals: + nstr += ' '.join(modifier.dim_quals) + ' ' + nstr += self.visit(modifier.dim) + ']' + elif isinstance(modifier, c_ast.FuncDecl): + if (i != 0 and + isinstance(modifiers[i - 1], c_ast.PtrDecl)): + nstr = '(' + nstr + ')' + nstr += '(' + self.visit(modifier.args) + ')' + elif isinstance(modifier, c_ast.PtrDecl): + if modifier.quals: + nstr = '* %s%s' % (' '.join(modifier.quals), + ' ' + nstr if nstr else '') + else: + nstr = '*' + nstr + if nstr: s += ' ' + nstr + return s + elif typ == c_ast.Decl: + return self._generate_decl(n.type) + elif typ == c_ast.Typename: + return self._generate_type(n.type, emit_declname = emit_declname) + elif typ == c_ast.IdentifierType: + return ' '.join(n.names) + ' ' + elif typ in (c_ast.ArrayDecl, c_ast.PtrDecl, c_ast.FuncDecl): + return self._generate_type(n.type, modifiers + [n], + emit_declname = emit_declname) + else: + return self.visit(n) + + def _parenthesize_if(self, n, condition): + """ Visits 'n' and returns its string representation, parenthesized + if the condition function applied to the node returns True. + """ + s = self._visit_expr(n) + if condition(n): + return '(' + s + ')' + else: + return s + + def _parenthesize_unless_simple(self, n): + """ Common use case for _parenthesize_if + """ + return self._parenthesize_if(n, lambda d: not self._is_simple_node(d)) + + def _is_simple_node(self, n): + """ Returns True for nodes that are "simple" - i.e. nodes that always + have higher precedence than operators. + """ + return isinstance(n, (c_ast.Constant, c_ast.ID, c_ast.ArrayRef, + c_ast.StructRef, c_ast.FuncCall)) diff --git a/templates/skills/file_manager/dependencies/pycparser/c_lexer.py b/templates/skills/file_manager/dependencies/pycparser/c_lexer.py new file mode 100644 index 00000000..22c64bc7 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/c_lexer.py @@ -0,0 +1,555 @@ +#------------------------------------------------------------------------------ +# pycparser: c_lexer.py +# +# CLexer class: lexer for the C language +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#------------------------------------------------------------------------------ +import re + +from .ply import lex +from .ply.lex import TOKEN + + +class CLexer(object): + """ A lexer for the C language. After building it, set the + input text with input(), and call token() to get new + tokens. + + The public attribute filename can be set to an initial + filename, but the lexer will update it upon #line + directives. + """ + def __init__(self, error_func, on_lbrace_func, on_rbrace_func, + type_lookup_func): + """ Create a new Lexer. + + error_func: + An error function. Will be called with an error + message, line and column as arguments, in case of + an error during lexing. + + on_lbrace_func, on_rbrace_func: + Called when an LBRACE or RBRACE is encountered + (likely to push/pop type_lookup_func's scope) + + type_lookup_func: + A type lookup function. Given a string, it must + return True IFF this string is a name of a type + that was defined with a typedef earlier. 
+ """ + self.error_func = error_func + self.on_lbrace_func = on_lbrace_func + self.on_rbrace_func = on_rbrace_func + self.type_lookup_func = type_lookup_func + self.filename = '' + + # Keeps track of the last token returned from self.token() + self.last_token = None + + # Allow either "# line" or "# " to support GCC's + # cpp output + # + self.line_pattern = re.compile(r'([ \t]*line\W)|([ \t]*\d+)') + self.pragma_pattern = re.compile(r'[ \t]*pragma\W') + + def build(self, **kwargs): + """ Builds the lexer from the specification. Must be + called after the lexer object is created. + + This method exists separately, because the PLY + manual warns against calling lex.lex inside + __init__ + """ + self.lexer = lex.lex(object=self, **kwargs) + + def reset_lineno(self): + """ Resets the internal line number counter of the lexer. + """ + self.lexer.lineno = 1 + + def input(self, text): + self.lexer.input(text) + + def token(self): + self.last_token = self.lexer.token() + return self.last_token + + def find_tok_column(self, token): + """ Find the column of the token in its line. + """ + last_cr = self.lexer.lexdata.rfind('\n', 0, token.lexpos) + return token.lexpos - last_cr + + ######################-- PRIVATE --###################### + + ## + ## Internal auxiliary methods + ## + def _error(self, msg, token): + location = self._make_tok_location(token) + self.error_func(msg, location[0], location[1]) + self.lexer.skip(1) + + def _make_tok_location(self, token): + return (token.lineno, self.find_tok_column(token)) + + ## + ## Reserved keywords + ## + keywords = ( + 'AUTO', 'BREAK', 'CASE', 'CHAR', 'CONST', + 'CONTINUE', 'DEFAULT', 'DO', 'DOUBLE', 'ELSE', 'ENUM', 'EXTERN', + 'FLOAT', 'FOR', 'GOTO', 'IF', 'INLINE', 'INT', 'LONG', + 'REGISTER', 'OFFSETOF', + 'RESTRICT', 'RETURN', 'SHORT', 'SIGNED', 'SIZEOF', 'STATIC', 'STRUCT', + 'SWITCH', 'TYPEDEF', 'UNION', 'UNSIGNED', 'VOID', + 'VOLATILE', 'WHILE', '__INT128', + ) + + keywords_new = ( + '_BOOL', '_COMPLEX', + '_NORETURN', '_THREAD_LOCAL', '_STATIC_ASSERT', + '_ATOMIC', '_ALIGNOF', '_ALIGNAS', + '_PRAGMA', + ) + + keyword_map = {} + + for keyword in keywords: + keyword_map[keyword.lower()] = keyword + + for keyword in keywords_new: + keyword_map[keyword[:2].upper() + keyword[2:].lower()] = keyword + + ## + ## All the tokens recognized by the lexer + ## + tokens = keywords + keywords_new + ( + # Identifiers + 'ID', + + # Type identifiers (identifiers previously defined as + # types with typedef) + 'TYPEID', + + # constants + 'INT_CONST_DEC', 'INT_CONST_OCT', 'INT_CONST_HEX', 'INT_CONST_BIN', 'INT_CONST_CHAR', + 'FLOAT_CONST', 'HEX_FLOAT_CONST', + 'CHAR_CONST', + 'WCHAR_CONST', + 'U8CHAR_CONST', + 'U16CHAR_CONST', + 'U32CHAR_CONST', + + # String literals + 'STRING_LITERAL', + 'WSTRING_LITERAL', + 'U8STRING_LITERAL', + 'U16STRING_LITERAL', + 'U32STRING_LITERAL', + + # Operators + 'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MOD', + 'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT', + 'LOR', 'LAND', 'LNOT', + 'LT', 'LE', 'GT', 'GE', 'EQ', 'NE', + + # Assignment + 'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', + 'PLUSEQUAL', 'MINUSEQUAL', + 'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', + 'OREQUAL', + + # Increment/decrement + 'PLUSPLUS', 'MINUSMINUS', + + # Structure dereference (->) + 'ARROW', + + # Conditional operator (?) + 'CONDOP', + + # Delimiters + 'LPAREN', 'RPAREN', # ( ) + 'LBRACKET', 'RBRACKET', # [ ] + 'LBRACE', 'RBRACE', # { } + 'COMMA', 'PERIOD', # . , + 'SEMI', 'COLON', # ; : + + # Ellipsis (...) 
+ 'ELLIPSIS', + + # pre-processor + 'PPHASH', # '#' + 'PPPRAGMA', # 'pragma' + 'PPPRAGMASTR', + ) + + ## + ## Regexes for use in tokens + ## + ## + + # valid C identifiers (K&R2: A.2.3), plus '$' (supported by some compilers) + identifier = r'[a-zA-Z_$][0-9a-zA-Z_$]*' + + hex_prefix = '0[xX]' + hex_digits = '[0-9a-fA-F]+' + bin_prefix = '0[bB]' + bin_digits = '[01]+' + + # integer constants (K&R2: A.2.5.1) + integer_suffix_opt = r'(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?' + decimal_constant = '(0'+integer_suffix_opt+')|([1-9][0-9]*'+integer_suffix_opt+')' + octal_constant = '0[0-7]*'+integer_suffix_opt + hex_constant = hex_prefix+hex_digits+integer_suffix_opt + bin_constant = bin_prefix+bin_digits+integer_suffix_opt + + bad_octal_constant = '0[0-7]*[89]' + + # character constants (K&R2: A.2.5.2) + # Note: a-zA-Z and '.-~^_!=&;,' are allowed as escape chars to support #line + # directives with Windows paths as filenames (..\..\dir\file) + # For the same reason, decimal_escape allows all digit sequences. We want to + # parse all correct code, even if it means to sometimes parse incorrect + # code. + # + # The original regexes were taken verbatim from the C syntax definition, + # and were later modified to avoid worst-case exponential running time. + # + # simple_escape = r"""([a-zA-Z._~!=&\^\-\\?'"])""" + # decimal_escape = r"""(\d+)""" + # hex_escape = r"""(x[0-9a-fA-F]+)""" + # bad_escape = r"""([\\][^a-zA-Z._~^!=&\^\-\\?'"x0-7])""" + # + # The following modifications were made to avoid the ambiguity that allowed backtracking: + # (https://github.com/eliben/pycparser/issues/61) + # + # - \x was removed from simple_escape, unless it was not followed by a hex digit, to avoid ambiguity with hex_escape. + # - hex_escape allows one or more hex characters, but requires that the next character(if any) is not hex + # - decimal_escape allows one or more decimal characters, but requires that the next character(if any) is not a decimal + # - bad_escape does not allow any decimals (8-9), to avoid conflicting with the permissive decimal_escape. + # + # Without this change, python's `re` module would recursively try parsing each ambiguous escape sequence in multiple ways. + # e.g. `\123` could be parsed as `\1`+`23`, `\12`+`3`, and `\123`. 
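The lookaheads described above can be sanity-checked directly with Python's `re` module. A minimal sketch that reproduces the component regexes defined next and shows that `\123` now has exactly one parse:

```python
import re

# Copied from the CLexer class attributes below.
simple_escape = r"""([a-wyzA-Z._~!=&\^\-\\?'"]|x(?![0-9a-fA-F]))"""
decimal_escape = r"""(\d+)(?!\d)"""
hex_escape = r"""(x[0-9a-fA-F]+)(?![0-9a-fA-F])"""
escape_sequence = (r"""(\\(""" + simple_escape + '|'
                   + decimal_escape + '|' + hex_escape + '))')

# The (?!\d) lookahead forces the maximal digit run, so the engine
# cannot backtrack into `\1` + `23` or `\12` + `3`.
print(re.match(escape_sequence, r'\123').group())   # \123
print(re.match(escape_sequence, r'\x41').group())   # \x41
```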
+ + simple_escape = r"""([a-wyzA-Z._~!=&\^\-\\?'"]|x(?![0-9a-fA-F]))""" + decimal_escape = r"""(\d+)(?!\d)""" + hex_escape = r"""(x[0-9a-fA-F]+)(?![0-9a-fA-F])""" + bad_escape = r"""([\\][^a-zA-Z._~^!=&\^\-\\?'"x0-9])""" + + escape_sequence = r"""(\\("""+simple_escape+'|'+decimal_escape+'|'+hex_escape+'))' + + # This complicated regex with lookahead might be slow for strings, so because all of the valid escapes (including \x) allowed + # 0 or more non-escaped characters after the first character, simple_escape+decimal_escape+hex_escape got simplified to + + escape_sequence_start_in_string = r"""(\\[0-9a-zA-Z._~!=&\^\-\\?'"])""" + + cconst_char = r"""([^'\\\n]|"""+escape_sequence+')' + char_const = "'"+cconst_char+"'" + wchar_const = 'L'+char_const + u8char_const = 'u8'+char_const + u16char_const = 'u'+char_const + u32char_const = 'U'+char_const + multicharacter_constant = "'"+cconst_char+"{2,4}'" + unmatched_quote = "('"+cconst_char+"*\\n)|('"+cconst_char+"*$)" + bad_char_const = r"""('"""+cconst_char+"""[^'\n]+')|('')|('"""+bad_escape+r"""[^'\n]*')""" + + # string literals (K&R2: A.2.6) + string_char = r"""([^"\\\n]|"""+escape_sequence_start_in_string+')' + string_literal = '"'+string_char+'*"' + wstring_literal = 'L'+string_literal + u8string_literal = 'u8'+string_literal + u16string_literal = 'u'+string_literal + u32string_literal = 'U'+string_literal + bad_string_literal = '"'+string_char+'*'+bad_escape+string_char+'*"' + + # floating constants (K&R2: A.2.5.3) + exponent_part = r"""([eE][-+]?[0-9]+)""" + fractional_constant = r"""([0-9]*\.[0-9]+)|([0-9]+\.)""" + floating_constant = '(((('+fractional_constant+')'+exponent_part+'?)|([0-9]+'+exponent_part+'))[FfLl]?)' + binary_exponent_part = r'''([pP][+-]?[0-9]+)''' + hex_fractional_constant = '((('+hex_digits+r""")?\."""+hex_digits+')|('+hex_digits+r"""\.))""" + hex_floating_constant = '('+hex_prefix+'('+hex_digits+'|'+hex_fractional_constant+')'+binary_exponent_part+'[FfLl]?)' + + ## + ## Lexer states: used for preprocessor \n-terminated directives + ## + states = ( + # ppline: preprocessor line directives + # + ('ppline', 'exclusive'), + + # pppragma: pragma + # + ('pppragma', 'exclusive'), + ) + + def t_PPHASH(self, t): + r'[ \t]*\#' + if self.line_pattern.match(t.lexer.lexdata, pos=t.lexer.lexpos): + t.lexer.begin('ppline') + self.pp_line = self.pp_filename = None + elif self.pragma_pattern.match(t.lexer.lexdata, pos=t.lexer.lexpos): + t.lexer.begin('pppragma') + else: + t.type = 'PPHASH' + return t + + ## + ## Rules for the ppline state + ## + @TOKEN(string_literal) + def t_ppline_FILENAME(self, t): + if self.pp_line is None: + self._error('filename before line number in #line', t) + else: + self.pp_filename = t.value.lstrip('"').rstrip('"') + + @TOKEN(decimal_constant) + def t_ppline_LINE_NUMBER(self, t): + if self.pp_line is None: + self.pp_line = t.value + else: + # Ignore: GCC's cpp sometimes inserts a numeric flag + # after the file name + pass + + def t_ppline_NEWLINE(self, t): + r'\n' + if self.pp_line is None: + self._error('line number missing in #line', t) + else: + self.lexer.lineno = int(self.pp_line) + + if self.pp_filename is not None: + self.filename = self.pp_filename + + t.lexer.begin('INITIAL') + + def t_ppline_PPLINE(self, t): + r'line' + pass + + t_ppline_ignore = ' \t' + + def t_ppline_error(self, t): + self._error('invalid #line directive', t) + + ## + ## Rules for the pppragma state + ## + def t_pppragma_NEWLINE(self, t): + r'\n' + t.lexer.lineno += 1 + t.lexer.begin('INITIAL') + + def 
t_pppragma_PPPRAGMA(self, t): + r'pragma' + return t + + t_pppragma_ignore = ' \t' + + def t_pppragma_STR(self, t): + '.+' + t.type = 'PPPRAGMASTR' + return t + + def t_pppragma_error(self, t): + self._error('invalid #pragma directive', t) + + ## + ## Rules for the normal state + ## + t_ignore = ' \t' + + # Newlines + def t_NEWLINE(self, t): + r'\n+' + t.lexer.lineno += t.value.count("\n") + + # Operators + t_PLUS = r'\+' + t_MINUS = r'-' + t_TIMES = r'\*' + t_DIVIDE = r'/' + t_MOD = r'%' + t_OR = r'\|' + t_AND = r'&' + t_NOT = r'~' + t_XOR = r'\^' + t_LSHIFT = r'<<' + t_RSHIFT = r'>>' + t_LOR = r'\|\|' + t_LAND = r'&&' + t_LNOT = r'!' + t_LT = r'<' + t_GT = r'>' + t_LE = r'<=' + t_GE = r'>=' + t_EQ = r'==' + t_NE = r'!=' + + # Assignment operators + t_EQUALS = r'=' + t_TIMESEQUAL = r'\*=' + t_DIVEQUAL = r'/=' + t_MODEQUAL = r'%=' + t_PLUSEQUAL = r'\+=' + t_MINUSEQUAL = r'-=' + t_LSHIFTEQUAL = r'<<=' + t_RSHIFTEQUAL = r'>>=' + t_ANDEQUAL = r'&=' + t_OREQUAL = r'\|=' + t_XOREQUAL = r'\^=' + + # Increment/decrement + t_PLUSPLUS = r'\+\+' + t_MINUSMINUS = r'--' + + # -> + t_ARROW = r'->' + + # ? + t_CONDOP = r'\?' + + # Delimiters + t_LPAREN = r'\(' + t_RPAREN = r'\)' + t_LBRACKET = r'\[' + t_RBRACKET = r'\]' + t_COMMA = r',' + t_PERIOD = r'\.' + t_SEMI = r';' + t_COLON = r':' + t_ELLIPSIS = r'\.\.\.' + + # Scope delimiters + # To see why on_lbrace_func is needed, consider: + # typedef char TT; + # void foo(int TT) { TT = 10; } + # TT x = 5; + # Outside the function, TT is a typedef, but inside (starting and ending + # with the braces) it's a parameter. The trouble begins with yacc's + # lookahead token. If we open a new scope in brace_open, then TT has + # already been read and incorrectly interpreted as TYPEID. So, we need + # to open and close scopes from within the lexer. + # Similar for the TT immediately outside the end of the function. 
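+ # As a rough illustration (not part of the original comment), for the
+ # snippet above the interaction is approximately:
+ #
+ #   lexer reads '{'  -> on_lbrace_func() -> a new scope is pushed
+ #   the parser then registers the parameter TT as an identifier there
+ #   inside the body, type_lookup_func('TT') is False -> TT lexes as ID
+ #   lexer reads '}'  -> on_rbrace_func() -> scope popped, TT is a type again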
+ # + @TOKEN(r'\{') + def t_LBRACE(self, t): + self.on_lbrace_func() + return t + @TOKEN(r'\}') + def t_RBRACE(self, t): + self.on_rbrace_func() + return t + + t_STRING_LITERAL = string_literal + + # The following floating and integer constants are defined as + # functions to impose a strict order (otherwise, decimal + # is placed before the others because its regex is longer, + # and this is bad) + # + @TOKEN(floating_constant) + def t_FLOAT_CONST(self, t): + return t + + @TOKEN(hex_floating_constant) + def t_HEX_FLOAT_CONST(self, t): + return t + + @TOKEN(hex_constant) + def t_INT_CONST_HEX(self, t): + return t + + @TOKEN(bin_constant) + def t_INT_CONST_BIN(self, t): + return t + + @TOKEN(bad_octal_constant) + def t_BAD_CONST_OCT(self, t): + msg = "Invalid octal constant" + self._error(msg, t) + + @TOKEN(octal_constant) + def t_INT_CONST_OCT(self, t): + return t + + @TOKEN(decimal_constant) + def t_INT_CONST_DEC(self, t): + return t + + # Must come before bad_char_const, to prevent it from + # catching valid char constants as invalid + # + @TOKEN(multicharacter_constant) + def t_INT_CONST_CHAR(self, t): + return t + + @TOKEN(char_const) + def t_CHAR_CONST(self, t): + return t + + @TOKEN(wchar_const) + def t_WCHAR_CONST(self, t): + return t + + @TOKEN(u8char_const) + def t_U8CHAR_CONST(self, t): + return t + + @TOKEN(u16char_const) + def t_U16CHAR_CONST(self, t): + return t + + @TOKEN(u32char_const) + def t_U32CHAR_CONST(self, t): + return t + + @TOKEN(unmatched_quote) + def t_UNMATCHED_QUOTE(self, t): + msg = "Unmatched '" + self._error(msg, t) + + @TOKEN(bad_char_const) + def t_BAD_CHAR_CONST(self, t): + msg = "Invalid char constant %s" % t.value + self._error(msg, t) + + @TOKEN(wstring_literal) + def t_WSTRING_LITERAL(self, t): + return t + + @TOKEN(u8string_literal) + def t_U8STRING_LITERAL(self, t): + return t + + @TOKEN(u16string_literal) + def t_U16STRING_LITERAL(self, t): + return t + + @TOKEN(u32string_literal) + def t_U32STRING_LITERAL(self, t): + return t + + # unmatched string literals are caught by the preprocessor + + @TOKEN(bad_string_literal) + def t_BAD_STRING_LITERAL(self, t): + msg = "String contains invalid escape code" + self._error(msg, t) + + @TOKEN(identifier) + def t_ID(self, t): + t.type = self.keyword_map.get(t.value, "ID") + if t.type == 'ID' and self.type_lookup_func(t.value): + t.type = "TYPEID" + return t + + def t_error(self, t): + msg = 'Illegal character %s' % repr(t.value[0]) + self._error(msg, t) diff --git a/templates/skills/file_manager/dependencies/pycparser/c_parser.py b/templates/skills/file_manager/dependencies/pycparser/c_parser.py new file mode 100644 index 00000000..d31574a5 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/c_parser.py @@ -0,0 +1,1950 @@ +#------------------------------------------------------------------------------ +# pycparser: c_parser.py +# +# CParser class: Parser and AST builder for the C language +# +# Eli Bendersky [https://eli.thegreenplace.net/] +# License: BSD +#------------------------------------------------------------------------------ +from .ply import yacc + +from . 
import c_ast +from .c_lexer import CLexer +from .plyparser import PLYParser, ParseError, parameterized, template +from .ast_transforms import fix_switch_cases, fix_atomic_specifiers + + +@template +class CParser(PLYParser): + def __init__( + self, + lex_optimize=True, + lexer=CLexer, + lextab='pycparser.lextab', + yacc_optimize=True, + yacctab='pycparser.yacctab', + yacc_debug=False, + taboutputdir=''): + """ Create a new CParser. + + Some arguments for controlling the debug/optimization + level of the parser are provided. The defaults are + tuned for release/performance mode. + The simple rules for using them are: + *) When tweaking CParser/CLexer, set these to False + *) When releasing a stable parser, set to True + + lex_optimize: + Set to False when you're modifying the lexer. + Otherwise, changes in the lexer won't be used, if + some lextab.py file exists. + When releasing with a stable lexer, set to True + to save the re-generation of the lexer table on + each run. + + lexer: + Set this parameter to define the lexer to use if + you're not using the default CLexer. + + lextab: + Points to the lex table that's used for optimized + mode. Only if you're modifying the lexer and want + some tests to avoid re-generating the table, make + this point to a local lex table file (that's been + earlier generated with lex_optimize=True) + + yacc_optimize: + Set to False when you're modifying the parser. + Otherwise, changes in the parser won't be used, if + some parsetab.py file exists. + When releasing with a stable parser, set to True + to save the re-generation of the parser table on + each run. + + yacctab: + Points to the yacc table that's used for optimized + mode. Only if you're modifying the parser, make + this point to a local yacc table file + + yacc_debug: + Generate a parser.out file that explains how yacc + built the parsing table from the grammar. + + taboutputdir: + Set this parameter to control the location of generated + lextab and yacctab files. + """ + self.clex = lexer( + error_func=self._lex_error_func, + on_lbrace_func=self._lex_on_lbrace_func, + on_rbrace_func=self._lex_on_rbrace_func, + type_lookup_func=self._lex_type_lookup_func) + + self.clex.build( + optimize=lex_optimize, + lextab=lextab, + outputdir=taboutputdir) + self.tokens = self.clex.tokens + + rules_with_opt = [ + 'abstract_declarator', + 'assignment_expression', + 'declaration_list', + 'declaration_specifiers_no_type', + 'designation', + 'expression', + 'identifier_list', + 'init_declarator_list', + 'id_init_declarator_list', + 'initializer_list', + 'parameter_type_list', + 'block_item_list', + 'type_qualifier_list', + 'struct_declarator_list' + ] + + for rule in rules_with_opt: + self._create_opt_rule(rule) + + self.cparser = yacc.yacc( + module=self, + start='translation_unit_or_empty', + debug=yacc_debug, + optimize=yacc_optimize, + tabmodule=yacctab, + outputdir=taboutputdir) + + # Stack of scopes for keeping track of symbols. _scope_stack[-1] is + # the current (topmost) scope. Each scope is a dictionary that + # specifies whether a name is a type. If _scope_stack[n][name] is + # True, 'name' is currently a type in the scope. If it's False, + # 'name' is used in the scope but not as a type (for instance, if we + # saw: int name; + # If 'name' is not a key in _scope_stack[n] then 'name' was not defined + # in this scope at all. 
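+ #
+ # For example (illustrative only), right after parsing
+ #
+ #   typedef char TT;
+ #   void foo(int TT) { /* here */ }
+ #
+ # the stack at the marked point looks roughly like
+ #
+ #   [{'TT': True}, {'TT': False}]
+ #
+ # so _is_type_in_scope('TT') is False there (the parameter shadows the
+ # typedef) until the inner scope is popped.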
+ self._scope_stack = [dict()] + + # Keeps track of the last token given to yacc (the lookahead token) + self._last_yielded_token = None + + def parse(self, text, filename='', debug=False): + """ Parses C code and returns an AST. + + text: + A string containing the C source code + + filename: + Name of the file being parsed (for meaningful + error messages) + + debug: + Debug flag to YACC + """ + self.clex.filename = filename + self.clex.reset_lineno() + self._scope_stack = [dict()] + self._last_yielded_token = None + return self.cparser.parse( + input=text, + lexer=self.clex, + debug=debug) + + ######################-- PRIVATE --###################### + + def _push_scope(self): + self._scope_stack.append(dict()) + + def _pop_scope(self): + assert len(self._scope_stack) > 1 + self._scope_stack.pop() + + def _add_typedef_name(self, name, coord): + """ Add a new typedef name (ie a TYPEID) to the current scope + """ + if not self._scope_stack[-1].get(name, True): + self._parse_error( + "Typedef %r previously declared as non-typedef " + "in this scope" % name, coord) + self._scope_stack[-1][name] = True + + def _add_identifier(self, name, coord): + """ Add a new object, function, or enum member name (ie an ID) to the + current scope + """ + if self._scope_stack[-1].get(name, False): + self._parse_error( + "Non-typedef %r previously declared as typedef " + "in this scope" % name, coord) + self._scope_stack[-1][name] = False + + def _is_type_in_scope(self, name): + """ Is *name* a typedef-name in the current scope? + """ + for scope in reversed(self._scope_stack): + # If name is an identifier in this scope it shadows typedefs in + # higher scopes. + in_scope = scope.get(name) + if in_scope is not None: return in_scope + return False + + def _lex_error_func(self, msg, line, column): + self._parse_error(msg, self._coord(line, column)) + + def _lex_on_lbrace_func(self): + self._push_scope() + + def _lex_on_rbrace_func(self): + self._pop_scope() + + def _lex_type_lookup_func(self, name): + """ Looks up types that were previously defined with + typedef. + Passed to the lexer for recognizing identifiers that + are types. + """ + is_type = self._is_type_in_scope(name) + return is_type + + def _get_yacc_lookahead_token(self): + """ We need access to yacc's lookahead token in certain cases. + This is the last token yacc requested from the lexer, so we + ask the lexer. + """ + return self.clex.last_token + + # To understand what's going on here, read sections A.8.5 and + # A.8.6 of K&R2 very carefully. + # + # A C type consists of a basic type declaration, with a list + # of modifiers. For example: + # + # int *c[5]; + # + # The basic declaration here is 'int c', and the pointer and + # the array are the modifiers. + # + # Basic declarations are represented by TypeDecl (from module c_ast) and the + # modifiers are FuncDecl, PtrDecl and ArrayDecl. + # + # The standard states that whenever a new modifier is parsed, it should be + # added to the end of the list of modifiers. For example: + # + # K&R2 A.8.6.2: Array Declarators + # + # In a declaration T D where D has the form + # D1 [constant-expression-opt] + # and the type of the identifier in the declaration T D1 is + # "type-modifier T", the type of the + # identifier of D is "type-modifier array of T" + # + # This is what this method does. The declarator it receives + # can be a list of declarators ending with TypeDecl. It + # tacks the modifier to the end of this list, just before + # the TypeDecl. 
+ # + # Additionally, the modifier may be a list itself. This is + # useful for pointers, that can come as a chain from the rule + # p_pointer. In this case, the whole modifier list is spliced + # into the new location. + def _type_modify_decl(self, decl, modifier): + """ Tacks a type modifier on a declarator, and returns + the modified declarator. + + Note: the declarator and modifier may be modified + """ + #~ print '****' + #~ decl.show(offset=3) + #~ modifier.show(offset=3) + #~ print '****' + + modifier_head = modifier + modifier_tail = modifier + + # The modifier may be a nested list. Reach its tail. + while modifier_tail.type: + modifier_tail = modifier_tail.type + + # If the decl is a basic type, just tack the modifier onto it. + if isinstance(decl, c_ast.TypeDecl): + modifier_tail.type = decl + return modifier + else: + # Otherwise, the decl is a list of modifiers. Reach + # its tail and splice the modifier onto the tail, + # pointing to the underlying basic type. + decl_tail = decl + + while not isinstance(decl_tail.type, c_ast.TypeDecl): + decl_tail = decl_tail.type + + modifier_tail.type = decl_tail.type + decl_tail.type = modifier_head + return decl + + # Due to the order in which declarators are constructed, + # they have to be fixed in order to look like a normal AST. + # + # When a declaration arrives from syntax construction, it has + # these problems: + # * The innermost TypeDecl has no type (because the basic + # type is only known at the uppermost declaration level) + # * The declaration has no variable name, since that is saved + # in the innermost TypeDecl + # * The typename of the declaration is a list of type + # specifiers, and not a node. Here, basic identifier types + # should be separated from more complex types like enums + # and structs. + # + # This method fixes these problems. + def _fix_decl_name_type(self, decl, typename): + """ Fixes a declaration. Modifies decl. + """ + # Reach the underlying basic type + # + type = decl + while not isinstance(type, c_ast.TypeDecl): + type = type.type + + decl.name = type.declname + type.quals = decl.quals[:] + + # The typename is a list of types. If any type in this + # list isn't an IdentifierType, it must be the only + # type in the list (it's illegal to declare "int enum ..") + # If all the types are basic, they're collected in the + # IdentifierType holder. + for tn in typename: + if not isinstance(tn, c_ast.IdentifierType): + if len(typename) > 1: + self._parse_error( + "Invalid multiple types specified", tn.coord) + else: + type.type = tn + return decl + + if not typename: + # Functions default to returning int + # + if not isinstance(decl.type, c_ast.FuncDecl): + self._parse_error( + "Missing type in declaration", decl.coord) + type.type = c_ast.IdentifierType( + ['int'], + coord=decl.coord) + else: + # At this point, we know that typename is a list of IdentifierType + # nodes. Concatenate all the names into a single list. + # + type.type = c_ast.IdentifierType( + [name for id in typename for name in id.names], + coord=typename[0].coord) + return decl + + def _add_declaration_specifier(self, declspec, newspec, kind, append=False): + """ Declaration specifiers are represented by a dictionary + with the entries: + * qual: a list of type qualifiers + * storage: a list of storage type qualifiers + * type: a list of type specifiers + * function: a list of function specifiers + * alignment: a list of alignment specifiers + + This method is given a declaration specifier, and a + new specifier of a given kind. 
+ If `append` is True, the new specifier is added to the end of + the specifiers list, otherwise it's added at the beginning. + Returns the declaration specifier, with the new + specifier incorporated. + """ + spec = declspec or dict(qual=[], storage=[], type=[], function=[], alignment=[]) + + if append: + spec[kind].append(newspec) + else: + spec[kind].insert(0, newspec) + + return spec + + def _build_declarations(self, spec, decls, typedef_namespace=False): + """ Builds a list of declarations all sharing the given specifiers. + If typedef_namespace is true, each declared name is added + to the "typedef namespace", which also includes objects, + functions, and enum constants. + """ + is_typedef = 'typedef' in spec['storage'] + declarations = [] + + # Bit-fields are allowed to be unnamed. + if decls[0].get('bitsize') is not None: + pass + + # When redeclaring typedef names as identifiers in inner scopes, a + # problem can occur where the identifier gets grouped into + # spec['type'], leaving decl as None. This can only occur for the + # first declarator. + elif decls[0]['decl'] is None: + if len(spec['type']) < 2 or len(spec['type'][-1].names) != 1 or \ + not self._is_type_in_scope(spec['type'][-1].names[0]): + coord = '?' + for t in spec['type']: + if hasattr(t, 'coord'): + coord = t.coord + break + self._parse_error('Invalid declaration', coord) + + # Make this look as if it came from "direct_declarator:ID" + decls[0]['decl'] = c_ast.TypeDecl( + declname=spec['type'][-1].names[0], + type=None, + quals=None, + align=spec['alignment'], + coord=spec['type'][-1].coord) + # Remove the "new" type's name from the end of spec['type'] + del spec['type'][-1] + + # A similar problem can occur where the declaration ends up looking + # like an abstract declarator. Give it a name if this is the case. + elif not isinstance(decls[0]['decl'], ( + c_ast.Enum, c_ast.Struct, c_ast.Union, c_ast.IdentifierType)): + decls_0_tail = decls[0]['decl'] + while not isinstance(decls_0_tail, c_ast.TypeDecl): + decls_0_tail = decls_0_tail.type + if decls_0_tail.declname is None: + decls_0_tail.declname = spec['type'][-1].names[0] + del spec['type'][-1] + + for decl in decls: + assert decl['decl'] is not None + if is_typedef: + declaration = c_ast.Typedef( + name=None, + quals=spec['qual'], + storage=spec['storage'], + type=decl['decl'], + coord=decl['decl'].coord) + else: + declaration = c_ast.Decl( + name=None, + quals=spec['qual'], + align=spec['alignment'], + storage=spec['storage'], + funcspec=spec['function'], + type=decl['decl'], + init=decl.get('init'), + bitsize=decl.get('bitsize'), + coord=decl['decl'].coord) + + if isinstance(declaration.type, ( + c_ast.Enum, c_ast.Struct, c_ast.Union, + c_ast.IdentifierType)): + fixed_decl = declaration + else: + fixed_decl = self._fix_decl_name_type(declaration, spec['type']) + + # Add the type name defined by typedef to a + # symbol table (for usage in the lexer) + if typedef_namespace: + if is_typedef: + self._add_typedef_name(fixed_decl.name, fixed_decl.coord) + else: + self._add_identifier(fixed_decl.name, fixed_decl.coord) + + fixed_decl = fix_atomic_specifiers(fixed_decl) + declarations.append(fixed_decl) + + return declarations + + def _build_function_definition(self, spec, decl, param_decls, body): + """ Builds a function definition. 
+ """ + if 'typedef' in spec['storage']: + self._parse_error("Invalid typedef", decl.coord) + + declaration = self._build_declarations( + spec=spec, + decls=[dict(decl=decl, init=None)], + typedef_namespace=True)[0] + + return c_ast.FuncDef( + decl=declaration, + param_decls=param_decls, + body=body, + coord=decl.coord) + + def _select_struct_union_class(self, token): + """ Given a token (either STRUCT or UNION), selects the + appropriate AST class. + """ + if token == 'struct': + return c_ast.Struct + else: + return c_ast.Union + + ## + ## Precedence and associativity of operators + ## + # If this changes, c_generator.CGenerator.precedence_map needs to change as + # well + precedence = ( + ('left', 'LOR'), + ('left', 'LAND'), + ('left', 'OR'), + ('left', 'XOR'), + ('left', 'AND'), + ('left', 'EQ', 'NE'), + ('left', 'GT', 'GE', 'LT', 'LE'), + ('left', 'RSHIFT', 'LSHIFT'), + ('left', 'PLUS', 'MINUS'), + ('left', 'TIMES', 'DIVIDE', 'MOD') + ) + + ## + ## Grammar productions + ## Implementation of the BNF defined in K&R2 A.13 + ## + + # Wrapper around a translation unit, to allow for empty input. + # Not strictly part of the C99 Grammar, but useful in practice. + def p_translation_unit_or_empty(self, p): + """ translation_unit_or_empty : translation_unit + | empty + """ + if p[1] is None: + p[0] = c_ast.FileAST([]) + else: + p[0] = c_ast.FileAST(p[1]) + + def p_translation_unit_1(self, p): + """ translation_unit : external_declaration + """ + # Note: external_declaration is already a list + p[0] = p[1] + + def p_translation_unit_2(self, p): + """ translation_unit : translation_unit external_declaration + """ + p[1].extend(p[2]) + p[0] = p[1] + + # Declarations always come as lists (because they can be + # several in one line), so we wrap the function definition + # into a list as well, to make the return value of + # external_declaration homogeneous. 
+ def p_external_declaration_1(self, p):
+ """ external_declaration : function_definition
+ """
+ p[0] = [p[1]]
+
+ def p_external_declaration_2(self, p):
+ """ external_declaration : declaration
+ """
+ p[0] = p[1]
+
+ def p_external_declaration_3(self, p):
+ """ external_declaration : pp_directive
+ | pppragma_directive
+ """
+ p[0] = [p[1]]
+
+ def p_external_declaration_4(self, p):
+ """ external_declaration : SEMI
+ """
+ p[0] = []
+
+ def p_external_declaration_5(self, p):
+ """ external_declaration : static_assert
+ """
+ p[0] = p[1]
+
+ def p_static_assert_declaration(self, p):
+ """ static_assert : _STATIC_ASSERT LPAREN constant_expression COMMA unified_string_literal RPAREN
+ | _STATIC_ASSERT LPAREN constant_expression RPAREN
+ """
+ if len(p) == 5:
+ p[0] = [c_ast.StaticAssert(p[3], None, self._token_coord(p, 1))]
+ else:
+ p[0] = [c_ast.StaticAssert(p[3], p[5], self._token_coord(p, 1))]
+
+ def p_pp_directive(self, p):
+ """ pp_directive : PPHASH
+ """
+ self._parse_error('Directives not supported yet',
+ self._token_coord(p, 1))
+
+ # This encompasses two types of C99-compatible pragmas:
+ # - The #pragma directive:
+ # # pragma character_sequence
+ # - The _Pragma unary operator:
+ # _Pragma ( " string_literal " )
+ def p_pppragma_directive(self, p):
+ """ pppragma_directive : PPPRAGMA
+ | PPPRAGMA PPPRAGMASTR
+ | _PRAGMA LPAREN unified_string_literal RPAREN
+ """
+ if len(p) == 5:
+ p[0] = c_ast.Pragma(p[3], self._token_coord(p, 2))
+ elif len(p) == 3:
+ p[0] = c_ast.Pragma(p[2], self._token_coord(p, 2))
+ else:
+ p[0] = c_ast.Pragma("", self._token_coord(p, 1))
+
+ def p_pppragma_directive_list(self, p):
+ """ pppragma_directive_list : pppragma_directive
+ | pppragma_directive_list pppragma_directive
+ """
+ p[0] = [p[1]] if len(p) == 2 else p[1] + [p[2]]
+
+ # In function definitions, the declarator can be followed by
+ # a declaration list, for old "K&R style" function definitions.
+ def p_function_definition_1(self, p):
+ """ function_definition : id_declarator declaration_list_opt compound_statement
+ """
+ # no declaration specifiers - 'int' becomes the default type
+ spec = dict(
+ qual=[],
+ alignment=[],
+ storage=[],
+ type=[c_ast.IdentifierType(['int'],
+ coord=self._token_coord(p, 1))],
+ function=[])
+
+ p[0] = self._build_function_definition(
+ spec=spec,
+ decl=p[1],
+ param_decls=p[2],
+ body=p[3])
+
+ def p_function_definition_2(self, p):
+ """ function_definition : declaration_specifiers id_declarator declaration_list_opt compound_statement
+ """
+ spec = p[1]
+
+ p[0] = self._build_function_definition(
+ spec=spec,
+ decl=p[2],
+ param_decls=p[3],
+ body=p[4])
+
+ # Note: according to C18 A.2.2 6.7.10, a static_assert-declaration
+ # (_Static_assert) is a declaration, not a statement. We additionally
+ # recognise it as a statement to fix parsing of _Static_assert inside
+ # functions.
+ #
+ def p_statement(self, p):
+ """ statement : labeled_statement
+ | expression_statement
+ | compound_statement
+ | selection_statement
+ | iteration_statement
+ | jump_statement
+ | pppragma_directive
+ | static_assert
+ """
+ p[0] = p[1]
+
+ # A pragma is generally considered a decorator rather than an actual
+ # statement. Still, for the purposes of analyzing an abstract syntax tree of
+ # C code, pragmas should not be ignored and were previously treated as a
+ # statement. This presents a problem for constructs that take a statement
+ # such as labeled_statements, selection_statements, and
+ # iteration_statements, causing a misleading structure in the AST.
For + # example, consider the following C code. + # + # for (int i = 0; i < 3; i++) + # #pragma omp critical + # sum += 1; + # + # This code will compile and execute "sum += 1;" as the body of the for + # loop. Previous implementations of PyCParser would render the AST for this + # block of code as follows: + # + # For: + # DeclList: + # Decl: i, [], [], [] + # TypeDecl: i, [] + # IdentifierType: ['int'] + # Constant: int, 0 + # BinaryOp: < + # ID: i + # Constant: int, 3 + # UnaryOp: p++ + # ID: i + # Pragma: omp critical + # Assignment: += + # ID: sum + # Constant: int, 1 + # + # This AST misleadingly takes the Pragma as the body of the loop and the + # assignment then becomes a sibling of the loop. + # + # To solve edge cases like these, the pragmacomp_or_statement rule groups + # a pragma and its following statement (which would otherwise be orphaned) + # using a compound block, effectively turning the above code into: + # + # for (int i = 0; i < 3; i++) { + # #pragma omp critical + # sum += 1; + # } + def p_pragmacomp_or_statement(self, p): + """ pragmacomp_or_statement : pppragma_directive_list statement + | statement + """ + if len(p) == 3: + p[0] = c_ast.Compound( + block_items=p[1]+[p[2]], + coord=self._token_coord(p, 1)) + else: + p[0] = p[1] + + # In C, declarations can come several in a line: + # int x, *px, romulo = 5; + # + # However, for the AST, we will split them to separate Decl + # nodes. + # + # This rule splits its declarations and always returns a list + # of Decl nodes, even if it's one element long. + # + def p_decl_body(self, p): + """ decl_body : declaration_specifiers init_declarator_list_opt + | declaration_specifiers_no_type id_init_declarator_list_opt + """ + spec = p[1] + + # p[2] (init_declarator_list_opt) is either a list or None + # + if p[2] is None: + # By the standard, you must have at least one declarator unless + # declaring a structure tag, a union tag, or the members of an + # enumeration. + # + ty = spec['type'] + s_u_or_e = (c_ast.Struct, c_ast.Union, c_ast.Enum) + if len(ty) == 1 and isinstance(ty[0], s_u_or_e): + decls = [c_ast.Decl( + name=None, + quals=spec['qual'], + align=spec['alignment'], + storage=spec['storage'], + funcspec=spec['function'], + type=ty[0], + init=None, + bitsize=None, + coord=ty[0].coord)] + + # However, this case can also occur on redeclared identifiers in + # an inner scope. The trouble is that the redeclared type's name + # gets grouped into declaration_specifiers; _build_declarations + # compensates for this. + # + else: + decls = self._build_declarations( + spec=spec, + decls=[dict(decl=None, init=None)], + typedef_namespace=True) + + else: + decls = self._build_declarations( + spec=spec, + decls=p[2], + typedef_namespace=True) + + p[0] = decls + + # The declaration has been split to a decl_body sub-rule and + # SEMI, because having them in a single rule created a problem + # for defining typedefs. + # + # If a typedef line was directly followed by a line using the + # type defined with the typedef, the type would not be + # recognized. This is because to reduce the declaration rule, + # the parser's lookahead asked for the token after SEMI, which + # was the type from the next line, and the lexer had no chance + # to see the updated type symbol table. + # + # Splitting solves this problem, because after seeing SEMI, + # the parser reduces decl_body, which actually adds the new + # type into the table to be seen by the lexer before the next + # line is reached. 
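+ #
+ # Illustrative example (not in the original comment): without the split,
+ #
+ #   typedef int TT;
+ #   TT x;
+ #
+ # would fail, because the second TT is fetched as yacc's lookahead
+ # before the typedef is reduced, so the lexer still tokenizes it as a
+ # plain ID instead of TYPEID.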
+ def p_declaration(self, p): + """ declaration : decl_body SEMI + """ + p[0] = p[1] + + # Since each declaration is a list of declarations, this + # rule will combine all the declarations and return a single + # list + # + def p_declaration_list(self, p): + """ declaration_list : declaration + | declaration_list declaration + """ + p[0] = p[1] if len(p) == 2 else p[1] + p[2] + + # To know when declaration-specifiers end and declarators begin, + # we require declaration-specifiers to have at least one + # type-specifier, and disallow typedef-names after we've seen any + # type-specifier. These are both required by the spec. + # + def p_declaration_specifiers_no_type_1(self, p): + """ declaration_specifiers_no_type : type_qualifier declaration_specifiers_no_type_opt + """ + p[0] = self._add_declaration_specifier(p[2], p[1], 'qual') + + def p_declaration_specifiers_no_type_2(self, p): + """ declaration_specifiers_no_type : storage_class_specifier declaration_specifiers_no_type_opt + """ + p[0] = self._add_declaration_specifier(p[2], p[1], 'storage') + + def p_declaration_specifiers_no_type_3(self, p): + """ declaration_specifiers_no_type : function_specifier declaration_specifiers_no_type_opt + """ + p[0] = self._add_declaration_specifier(p[2], p[1], 'function') + + # Without this, `typedef _Atomic(T) U` will parse incorrectly because the + # _Atomic qualifier will match, instead of the specifier. + def p_declaration_specifiers_no_type_4(self, p): + """ declaration_specifiers_no_type : atomic_specifier declaration_specifiers_no_type_opt + """ + p[0] = self._add_declaration_specifier(p[2], p[1], 'type') + + def p_declaration_specifiers_no_type_5(self, p): + """ declaration_specifiers_no_type : alignment_specifier declaration_specifiers_no_type_opt + """ + p[0] = self._add_declaration_specifier(p[2], p[1], 'alignment') + + def p_declaration_specifiers_1(self, p): + """ declaration_specifiers : declaration_specifiers type_qualifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'qual', append=True) + + def p_declaration_specifiers_2(self, p): + """ declaration_specifiers : declaration_specifiers storage_class_specifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'storage', append=True) + + def p_declaration_specifiers_3(self, p): + """ declaration_specifiers : declaration_specifiers function_specifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'function', append=True) + + def p_declaration_specifiers_4(self, p): + """ declaration_specifiers : declaration_specifiers type_specifier_no_typeid + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'type', append=True) + + def p_declaration_specifiers_5(self, p): + """ declaration_specifiers : type_specifier + """ + p[0] = self._add_declaration_specifier(None, p[1], 'type') + + def p_declaration_specifiers_6(self, p): + """ declaration_specifiers : declaration_specifiers_no_type type_specifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'type', append=True) + + def p_declaration_specifiers_7(self, p): + """ declaration_specifiers : declaration_specifiers alignment_specifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'alignment', append=True) + + def p_storage_class_specifier(self, p): + """ storage_class_specifier : AUTO + | REGISTER + | STATIC + | EXTERN + | TYPEDEF + | _THREAD_LOCAL + """ + p[0] = p[1] + + def p_function_specifier(self, p): + """ function_specifier : INLINE + | _NORETURN + """ + p[0] = p[1] + + def p_type_specifier_no_typeid(self, p): + """ 
type_specifier_no_typeid : VOID + | _BOOL + | CHAR + | SHORT + | INT + | LONG + | FLOAT + | DOUBLE + | _COMPLEX + | SIGNED + | UNSIGNED + | __INT128 + """ + p[0] = c_ast.IdentifierType([p[1]], coord=self._token_coord(p, 1)) + + def p_type_specifier(self, p): + """ type_specifier : typedef_name + | enum_specifier + | struct_or_union_specifier + | type_specifier_no_typeid + | atomic_specifier + """ + p[0] = p[1] + + # See section 6.7.2.4 of the C11 standard. + def p_atomic_specifier(self, p): + """ atomic_specifier : _ATOMIC LPAREN type_name RPAREN + """ + typ = p[3] + typ.quals.append('_Atomic') + p[0] = typ + + def p_type_qualifier(self, p): + """ type_qualifier : CONST + | RESTRICT + | VOLATILE + | _ATOMIC + """ + p[0] = p[1] + + def p_init_declarator_list(self, p): + """ init_declarator_list : init_declarator + | init_declarator_list COMMA init_declarator + """ + p[0] = p[1] + [p[3]] if len(p) == 4 else [p[1]] + + # Returns a {decl= : init=} dictionary + # If there's no initializer, uses None + # + def p_init_declarator(self, p): + """ init_declarator : declarator + | declarator EQUALS initializer + """ + p[0] = dict(decl=p[1], init=(p[3] if len(p) > 2 else None)) + + def p_id_init_declarator_list(self, p): + """ id_init_declarator_list : id_init_declarator + | id_init_declarator_list COMMA init_declarator + """ + p[0] = p[1] + [p[3]] if len(p) == 4 else [p[1]] + + def p_id_init_declarator(self, p): + """ id_init_declarator : id_declarator + | id_declarator EQUALS initializer + """ + p[0] = dict(decl=p[1], init=(p[3] if len(p) > 2 else None)) + + # Require at least one type specifier in a specifier-qualifier-list + # + def p_specifier_qualifier_list_1(self, p): + """ specifier_qualifier_list : specifier_qualifier_list type_specifier_no_typeid + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'type', append=True) + + def p_specifier_qualifier_list_2(self, p): + """ specifier_qualifier_list : specifier_qualifier_list type_qualifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'qual', append=True) + + def p_specifier_qualifier_list_3(self, p): + """ specifier_qualifier_list : type_specifier + """ + p[0] = self._add_declaration_specifier(None, p[1], 'type') + + def p_specifier_qualifier_list_4(self, p): + """ specifier_qualifier_list : type_qualifier_list type_specifier + """ + p[0] = dict(qual=p[1], alignment=[], storage=[], type=[p[2]], function=[]) + + def p_specifier_qualifier_list_5(self, p): + """ specifier_qualifier_list : alignment_specifier + """ + p[0] = dict(qual=[], alignment=[p[1]], storage=[], type=[], function=[]) + + def p_specifier_qualifier_list_6(self, p): + """ specifier_qualifier_list : specifier_qualifier_list alignment_specifier + """ + p[0] = self._add_declaration_specifier(p[1], p[2], 'alignment') + + # TYPEID is allowed here (and in other struct/enum related tag names), because + # struct/enum tags reside in their own namespace and can be named the same as types + # + def p_struct_or_union_specifier_1(self, p): + """ struct_or_union_specifier : struct_or_union ID + | struct_or_union TYPEID + """ + klass = self._select_struct_union_class(p[1]) + # None means no list of members + p[0] = klass( + name=p[2], + decls=None, + coord=self._token_coord(p, 2)) + + def p_struct_or_union_specifier_2(self, p): + """ struct_or_union_specifier : struct_or_union brace_open struct_declaration_list brace_close + | struct_or_union brace_open brace_close + """ + klass = self._select_struct_union_class(p[1]) + if len(p) == 4: + # Empty sequence means an empty list 
of members + p[0] = klass( + name=None, + decls=[], + coord=self._token_coord(p, 2)) + else: + p[0] = klass( + name=None, + decls=p[3], + coord=self._token_coord(p, 2)) + + + def p_struct_or_union_specifier_3(self, p): + """ struct_or_union_specifier : struct_or_union ID brace_open struct_declaration_list brace_close + | struct_or_union ID brace_open brace_close + | struct_or_union TYPEID brace_open struct_declaration_list brace_close + | struct_or_union TYPEID brace_open brace_close + """ + klass = self._select_struct_union_class(p[1]) + if len(p) == 5: + # Empty sequence means an empty list of members + p[0] = klass( + name=p[2], + decls=[], + coord=self._token_coord(p, 2)) + else: + p[0] = klass( + name=p[2], + decls=p[4], + coord=self._token_coord(p, 2)) + + def p_struct_or_union(self, p): + """ struct_or_union : STRUCT + | UNION + """ + p[0] = p[1] + + # Combine all declarations into a single list + # + def p_struct_declaration_list(self, p): + """ struct_declaration_list : struct_declaration + | struct_declaration_list struct_declaration + """ + if len(p) == 2: + p[0] = p[1] or [] + else: + p[0] = p[1] + (p[2] or []) + + def p_struct_declaration_1(self, p): + """ struct_declaration : specifier_qualifier_list struct_declarator_list_opt SEMI + """ + spec = p[1] + assert 'typedef' not in spec['storage'] + + if p[2] is not None: + decls = self._build_declarations( + spec=spec, + decls=p[2]) + + elif len(spec['type']) == 1: + # Anonymous struct/union, gcc extension, C1x feature. + # Although the standard only allows structs/unions here, I see no + # reason to disallow other types since some compilers have typedefs + # here, and pycparser isn't about rejecting all invalid code. + # + node = spec['type'][0] + if isinstance(node, c_ast.Node): + decl_type = node + else: + decl_type = c_ast.IdentifierType(node) + + decls = self._build_declarations( + spec=spec, + decls=[dict(decl=decl_type)]) + + else: + # Structure/union members can have the same names as typedefs. + # The trouble is that the member's name gets grouped into + # specifier_qualifier_list; _build_declarations compensates. 
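+ # (Illustrative: with a file-scope 'typedef int TT;', a member line
+ # 'int TT;' reaches this point with 'TT' absorbed into spec['type']
+ # and no declarator; _build_declarations recovers 'TT' as the member
+ # name, just as in the decl_body case.)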
+ # + decls = self._build_declarations( + spec=spec, + decls=[dict(decl=None, init=None)]) + + p[0] = decls + + def p_struct_declaration_2(self, p): + """ struct_declaration : SEMI + """ + p[0] = None + + def p_struct_declaration_3(self, p): + """ struct_declaration : pppragma_directive + """ + p[0] = [p[1]] + + def p_struct_declarator_list(self, p): + """ struct_declarator_list : struct_declarator + | struct_declarator_list COMMA struct_declarator + """ + p[0] = p[1] + [p[3]] if len(p) == 4 else [p[1]] + + # struct_declarator passes up a dict with the keys: decl (for + # the underlying declarator) and bitsize (for the bitsize) + # + def p_struct_declarator_1(self, p): + """ struct_declarator : declarator + """ + p[0] = {'decl': p[1], 'bitsize': None} + + def p_struct_declarator_2(self, p): + """ struct_declarator : declarator COLON constant_expression + | COLON constant_expression + """ + if len(p) > 3: + p[0] = {'decl': p[1], 'bitsize': p[3]} + else: + p[0] = {'decl': c_ast.TypeDecl(None, None, None, None), 'bitsize': p[2]} + + def p_enum_specifier_1(self, p): + """ enum_specifier : ENUM ID + | ENUM TYPEID + """ + p[0] = c_ast.Enum(p[2], None, self._token_coord(p, 1)) + + def p_enum_specifier_2(self, p): + """ enum_specifier : ENUM brace_open enumerator_list brace_close + """ + p[0] = c_ast.Enum(None, p[3], self._token_coord(p, 1)) + + def p_enum_specifier_3(self, p): + """ enum_specifier : ENUM ID brace_open enumerator_list brace_close + | ENUM TYPEID brace_open enumerator_list brace_close + """ + p[0] = c_ast.Enum(p[2], p[4], self._token_coord(p, 1)) + + def p_enumerator_list(self, p): + """ enumerator_list : enumerator + | enumerator_list COMMA + | enumerator_list COMMA enumerator + """ + if len(p) == 2: + p[0] = c_ast.EnumeratorList([p[1]], p[1].coord) + elif len(p) == 3: + p[0] = p[1] + else: + p[1].enumerators.append(p[3]) + p[0] = p[1] + + def p_alignment_specifier(self, p): + """ alignment_specifier : _ALIGNAS LPAREN type_name RPAREN + | _ALIGNAS LPAREN constant_expression RPAREN + """ + p[0] = c_ast.Alignas(p[3], self._token_coord(p, 1)) + + def p_enumerator(self, p): + """ enumerator : ID + | ID EQUALS constant_expression + """ + if len(p) == 2: + enumerator = c_ast.Enumerator( + p[1], None, + self._token_coord(p, 1)) + else: + enumerator = c_ast.Enumerator( + p[1], p[3], + self._token_coord(p, 1)) + self._add_identifier(enumerator.name, enumerator.coord) + + p[0] = enumerator + + def p_declarator(self, p): + """ declarator : id_declarator + | typeid_declarator + """ + p[0] = p[1] + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_xxx_declarator_1(self, p): + """ xxx_declarator : direct_xxx_declarator + """ + p[0] = p[1] + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_xxx_declarator_2(self, p): + """ xxx_declarator : pointer direct_xxx_declarator + """ + p[0] = self._type_modify_decl(p[2], p[1]) + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_direct_xxx_declarator_1(self, p): + """ direct_xxx_declarator : yyy + """ + p[0] = c_ast.TypeDecl( + declname=p[1], + type=None, + quals=None, + align=None, + coord=self._token_coord(p, 1)) + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID')) + def p_direct_xxx_declarator_2(self, p): + """ direct_xxx_declarator : LPAREN xxx_declarator RPAREN + """ + p[0] = p[2] + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_direct_xxx_declarator_3(self, p): + """ 
direct_xxx_declarator : direct_xxx_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET + """ + quals = (p[3] if len(p) > 5 else []) or [] + # Accept dimension qualifiers + # Per C99 6.7.5.3 p7 + arr = c_ast.ArrayDecl( + type=None, + dim=p[4] if len(p) > 5 else p[3], + dim_quals=quals, + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=arr) + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_direct_xxx_declarator_4(self, p): + """ direct_xxx_declarator : direct_xxx_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET + | direct_xxx_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET + """ + # Using slice notation for PLY objects doesn't work in Python 3 for the + # version of PLY embedded with pycparser; see PLY Google Code issue 30. + # Work around that here by listing the two elements separately. + listed_quals = [item if isinstance(item, list) else [item] + for item in [p[3],p[4]]] + dim_quals = [qual for sublist in listed_quals for qual in sublist + if qual is not None] + arr = c_ast.ArrayDecl( + type=None, + dim=p[5], + dim_quals=dim_quals, + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=arr) + + # Special for VLAs + # + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_direct_xxx_declarator_5(self, p): + """ direct_xxx_declarator : direct_xxx_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET + """ + arr = c_ast.ArrayDecl( + type=None, + dim=c_ast.ID(p[4], self._token_coord(p, 4)), + dim_quals=p[3] if p[3] is not None else [], + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=arr) + + @parameterized(('id', 'ID'), ('typeid', 'TYPEID'), ('typeid_noparen', 'TYPEID')) + def p_direct_xxx_declarator_6(self, p): + """ direct_xxx_declarator : direct_xxx_declarator LPAREN parameter_type_list RPAREN + | direct_xxx_declarator LPAREN identifier_list_opt RPAREN + """ + func = c_ast.FuncDecl( + args=p[3], + type=None, + coord=p[1].coord) + + # To see why _get_yacc_lookahead_token is needed, consider: + # typedef char TT; + # void foo(int TT) { TT = 10; } + # Outside the function, TT is a typedef, but inside (starting and + # ending with the braces) it's a parameter. The trouble begins with + # yacc's lookahead token. We don't know if we're declaring or + # defining a function until we see LBRACE, but if we wait for yacc to + # trigger a rule on that token, then TT will have already been read + # and incorrectly interpreted as TYPEID. We need to add the + # parameters to the scope the moment the lexer sees LBRACE. + # + if self._get_yacc_lookahead_token().type == "LBRACE": + if func.args is not None: + for param in func.args.params: + if isinstance(param, c_ast.EllipsisParam): break + self._add_identifier(param.name, param.coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=func) + + def p_pointer(self, p): + """ pointer : TIMES type_qualifier_list_opt + | TIMES type_qualifier_list_opt pointer + """ + coord = self._token_coord(p, 1) + # Pointer decls nest from inside out. This is important when different + # levels have different qualifiers. For example: + # + # char * const * p; + # + # Means "pointer to const pointer to char" + # + # While: + # + # char ** const p; + # + # Means "const pointer to pointer to char" + # + # So when we construct PtrDecl nestings, the leftmost pointer goes in + # as the most nested type. 
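+ # Illustrative sketch (not in the original comment) for 'char * const * p;':
+ #
+ #   PtrDecl(quals=[],            # rightmost '*', outermost in the AST
+ #     PtrDecl(quals=['const'],   # leftmost '* const', most nested
+ #       TypeDecl('p', IdentifierType(['char']))))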
+ nested_type = c_ast.PtrDecl(quals=p[2] or [], type=None, coord=coord) + if len(p) > 3: + tail_type = p[3] + while tail_type.type is not None: + tail_type = tail_type.type + tail_type.type = nested_type + p[0] = p[3] + else: + p[0] = nested_type + + def p_type_qualifier_list(self, p): + """ type_qualifier_list : type_qualifier + | type_qualifier_list type_qualifier + """ + p[0] = [p[1]] if len(p) == 2 else p[1] + [p[2]] + + def p_parameter_type_list(self, p): + """ parameter_type_list : parameter_list + | parameter_list COMMA ELLIPSIS + """ + if len(p) > 2: + p[1].params.append(c_ast.EllipsisParam(self._token_coord(p, 3))) + + p[0] = p[1] + + def p_parameter_list(self, p): + """ parameter_list : parameter_declaration + | parameter_list COMMA parameter_declaration + """ + if len(p) == 2: # single parameter + p[0] = c_ast.ParamList([p[1]], p[1].coord) + else: + p[1].params.append(p[3]) + p[0] = p[1] + + # From ISO/IEC 9899:TC2, 6.7.5.3.11: + # "If, in a parameter declaration, an identifier can be treated either + # as a typedef name or as a parameter name, it shall be taken as a + # typedef name." + # + # Inside a parameter declaration, once we've reduced declaration specifiers, + # if we shift in an LPAREN and see a TYPEID, it could be either an abstract + # declarator or a declarator nested inside parens. This rule tells us to + # always treat it as an abstract declarator. Therefore, we only accept + # `id_declarator`s and `typeid_noparen_declarator`s. + def p_parameter_declaration_1(self, p): + """ parameter_declaration : declaration_specifiers id_declarator + | declaration_specifiers typeid_noparen_declarator + """ + spec = p[1] + if not spec['type']: + spec['type'] = [c_ast.IdentifierType(['int'], + coord=self._token_coord(p, 1))] + p[0] = self._build_declarations( + spec=spec, + decls=[dict(decl=p[2])])[0] + + def p_parameter_declaration_2(self, p): + """ parameter_declaration : declaration_specifiers abstract_declarator_opt + """ + spec = p[1] + if not spec['type']: + spec['type'] = [c_ast.IdentifierType(['int'], + coord=self._token_coord(p, 1))] + + # Parameters can have the same names as typedefs. The trouble is that + # the parameter's name gets grouped into declaration_specifiers, making + # it look like an old-style declaration; compensate. 
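+ #
+ # e.g. (illustrative): with a file-scope 'typedef char TT;', the
+ # parameter in 'void foo(int TT)' arrives here with p[2] == None and
+ # 'TT' grouped into spec['type']; _build_declarations peels it back
+ # off and uses it as the parameter name.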
+ # + if len(spec['type']) > 1 and len(spec['type'][-1].names) == 1 and \ + self._is_type_in_scope(spec['type'][-1].names[0]): + decl = self._build_declarations( + spec=spec, + decls=[dict(decl=p[2], init=None)])[0] + + # This truly is an old-style parameter declaration + # + else: + decl = c_ast.Typename( + name='', + quals=spec['qual'], + align=None, + type=p[2] or c_ast.TypeDecl(None, None, None, None), + coord=self._token_coord(p, 2)) + typename = spec['type'] + decl = self._fix_decl_name_type(decl, typename) + + p[0] = decl + + def p_identifier_list(self, p): + """ identifier_list : identifier + | identifier_list COMMA identifier + """ + if len(p) == 2: # single parameter + p[0] = c_ast.ParamList([p[1]], p[1].coord) + else: + p[1].params.append(p[3]) + p[0] = p[1] + + def p_initializer_1(self, p): + """ initializer : assignment_expression + """ + p[0] = p[1] + + def p_initializer_2(self, p): + """ initializer : brace_open initializer_list_opt brace_close + | brace_open initializer_list COMMA brace_close + """ + if p[2] is None: + p[0] = c_ast.InitList([], self._token_coord(p, 1)) + else: + p[0] = p[2] + + def p_initializer_list(self, p): + """ initializer_list : designation_opt initializer + | initializer_list COMMA designation_opt initializer + """ + if len(p) == 3: # single initializer + init = p[2] if p[1] is None else c_ast.NamedInitializer(p[1], p[2]) + p[0] = c_ast.InitList([init], p[2].coord) + else: + init = p[4] if p[3] is None else c_ast.NamedInitializer(p[3], p[4]) + p[1].exprs.append(init) + p[0] = p[1] + + def p_designation(self, p): + """ designation : designator_list EQUALS + """ + p[0] = p[1] + + # Designators are represented as a list of nodes, in the order in which + # they're written in the code. + # + def p_designator_list(self, p): + """ designator_list : designator + | designator_list designator + """ + p[0] = [p[1]] if len(p) == 2 else p[1] + [p[2]] + + def p_designator(self, p): + """ designator : LBRACKET constant_expression RBRACKET + | PERIOD identifier + """ + p[0] = p[2] + + def p_type_name(self, p): + """ type_name : specifier_qualifier_list abstract_declarator_opt + """ + typename = c_ast.Typename( + name='', + quals=p[1]['qual'][:], + align=None, + type=p[2] or c_ast.TypeDecl(None, None, None, None), + coord=self._token_coord(p, 2)) + + p[0] = self._fix_decl_name_type(typename, p[1]['type']) + + def p_abstract_declarator_1(self, p): + """ abstract_declarator : pointer + """ + dummytype = c_ast.TypeDecl(None, None, None, None) + p[0] = self._type_modify_decl( + decl=dummytype, + modifier=p[1]) + + def p_abstract_declarator_2(self, p): + """ abstract_declarator : pointer direct_abstract_declarator + """ + p[0] = self._type_modify_decl(p[2], p[1]) + + def p_abstract_declarator_3(self, p): + """ abstract_declarator : direct_abstract_declarator + """ + p[0] = p[1] + + # Creating and using direct_abstract_declarator_opt here + # instead of listing both direct_abstract_declarator and the + # lack of it in the beginning of _1 and _2 caused two + # shift/reduce errors. 
+ # + def p_direct_abstract_declarator_1(self, p): + """ direct_abstract_declarator : LPAREN abstract_declarator RPAREN """ + p[0] = p[2] + + def p_direct_abstract_declarator_2(self, p): + """ direct_abstract_declarator : direct_abstract_declarator LBRACKET assignment_expression_opt RBRACKET + """ + arr = c_ast.ArrayDecl( + type=None, + dim=p[3], + dim_quals=[], + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=arr) + + def p_direct_abstract_declarator_3(self, p): + """ direct_abstract_declarator : LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET + """ + quals = (p[2] if len(p) > 4 else []) or [] + p[0] = c_ast.ArrayDecl( + type=c_ast.TypeDecl(None, None, None, None), + dim=p[3] if len(p) > 4 else p[2], + dim_quals=quals, + coord=self._token_coord(p, 1)) + + def p_direct_abstract_declarator_4(self, p): + """ direct_abstract_declarator : direct_abstract_declarator LBRACKET TIMES RBRACKET + """ + arr = c_ast.ArrayDecl( + type=None, + dim=c_ast.ID(p[3], self._token_coord(p, 3)), + dim_quals=[], + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=arr) + + def p_direct_abstract_declarator_5(self, p): + """ direct_abstract_declarator : LBRACKET TIMES RBRACKET + """ + p[0] = c_ast.ArrayDecl( + type=c_ast.TypeDecl(None, None, None, None), + dim=c_ast.ID(p[3], self._token_coord(p, 3)), + dim_quals=[], + coord=self._token_coord(p, 1)) + + def p_direct_abstract_declarator_6(self, p): + """ direct_abstract_declarator : direct_abstract_declarator LPAREN parameter_type_list_opt RPAREN + """ + func = c_ast.FuncDecl( + args=p[3], + type=None, + coord=p[1].coord) + + p[0] = self._type_modify_decl(decl=p[1], modifier=func) + + def p_direct_abstract_declarator_7(self, p): + """ direct_abstract_declarator : LPAREN parameter_type_list_opt RPAREN + """ + p[0] = c_ast.FuncDecl( + args=p[2], + type=c_ast.TypeDecl(None, None, None, None), + coord=self._token_coord(p, 1)) + + # declaration is a list, statement isn't. 
To make it consistent, block_item + # will always be a list + # + def p_block_item(self, p): + """ block_item : declaration + | statement + """ + p[0] = p[1] if isinstance(p[1], list) else [p[1]] + + # Since we made block_item a list, this just combines lists + # + def p_block_item_list(self, p): + """ block_item_list : block_item + | block_item_list block_item + """ + # Empty block items (plain ';') produce [None], so ignore them + p[0] = p[1] if (len(p) == 2 or p[2] == [None]) else p[1] + p[2] + + def p_compound_statement_1(self, p): + """ compound_statement : brace_open block_item_list_opt brace_close """ + p[0] = c_ast.Compound( + block_items=p[2], + coord=self._token_coord(p, 1)) + + def p_labeled_statement_1(self, p): + """ labeled_statement : ID COLON pragmacomp_or_statement """ + p[0] = c_ast.Label(p[1], p[3], self._token_coord(p, 1)) + + def p_labeled_statement_2(self, p): + """ labeled_statement : CASE constant_expression COLON pragmacomp_or_statement """ + p[0] = c_ast.Case(p[2], [p[4]], self._token_coord(p, 1)) + + def p_labeled_statement_3(self, p): + """ labeled_statement : DEFAULT COLON pragmacomp_or_statement """ + p[0] = c_ast.Default([p[3]], self._token_coord(p, 1)) + + def p_selection_statement_1(self, p): + """ selection_statement : IF LPAREN expression RPAREN pragmacomp_or_statement """ + p[0] = c_ast.If(p[3], p[5], None, self._token_coord(p, 1)) + + def p_selection_statement_2(self, p): + """ selection_statement : IF LPAREN expression RPAREN statement ELSE pragmacomp_or_statement """ + p[0] = c_ast.If(p[3], p[5], p[7], self._token_coord(p, 1)) + + def p_selection_statement_3(self, p): + """ selection_statement : SWITCH LPAREN expression RPAREN pragmacomp_or_statement """ + p[0] = fix_switch_cases( + c_ast.Switch(p[3], p[5], self._token_coord(p, 1))) + + def p_iteration_statement_1(self, p): + """ iteration_statement : WHILE LPAREN expression RPAREN pragmacomp_or_statement """ + p[0] = c_ast.While(p[3], p[5], self._token_coord(p, 1)) + + def p_iteration_statement_2(self, p): + """ iteration_statement : DO pragmacomp_or_statement WHILE LPAREN expression RPAREN SEMI """ + p[0] = c_ast.DoWhile(p[5], p[2], self._token_coord(p, 1)) + + def p_iteration_statement_3(self, p): + """ iteration_statement : FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement """ + p[0] = c_ast.For(p[3], p[5], p[7], p[9], self._token_coord(p, 1)) + + def p_iteration_statement_4(self, p): + """ iteration_statement : FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement """ + p[0] = c_ast.For(c_ast.DeclList(p[3], self._token_coord(p, 1)), + p[4], p[6], p[8], self._token_coord(p, 1)) + + def p_jump_statement_1(self, p): + """ jump_statement : GOTO ID SEMI """ + p[0] = c_ast.Goto(p[2], self._token_coord(p, 1)) + + def p_jump_statement_2(self, p): + """ jump_statement : BREAK SEMI """ + p[0] = c_ast.Break(self._token_coord(p, 1)) + + def p_jump_statement_3(self, p): + """ jump_statement : CONTINUE SEMI """ + p[0] = c_ast.Continue(self._token_coord(p, 1)) + + def p_jump_statement_4(self, p): + """ jump_statement : RETURN expression SEMI + | RETURN SEMI + """ + p[0] = c_ast.Return(p[2] if len(p) == 4 else None, self._token_coord(p, 1)) + + def p_expression_statement(self, p): + """ expression_statement : expression_opt SEMI """ + if p[1] is None: + p[0] = c_ast.EmptyStatement(self._token_coord(p, 2)) + else: + p[0] = p[1] + + def p_expression(self, p): + """ expression : assignment_expression + | expression COMMA 
assignment_expression + """ + if len(p) == 2: + p[0] = p[1] + else: + if not isinstance(p[1], c_ast.ExprList): + p[1] = c_ast.ExprList([p[1]], p[1].coord) + + p[1].exprs.append(p[3]) + p[0] = p[1] + + def p_parenthesized_compound_expression(self, p): + """ assignment_expression : LPAREN compound_statement RPAREN """ + p[0] = p[2] + + def p_typedef_name(self, p): + """ typedef_name : TYPEID """ + p[0] = c_ast.IdentifierType([p[1]], coord=self._token_coord(p, 1)) + + def p_assignment_expression(self, p): + """ assignment_expression : conditional_expression + | unary_expression assignment_operator assignment_expression + """ + if len(p) == 2: + p[0] = p[1] + else: + p[0] = c_ast.Assignment(p[2], p[1], p[3], p[1].coord) + + # K&R2 defines these as many separate rules, to encode + # precedence and associativity. Why work hard ? I'll just use + # the built in precedence/associativity specification feature + # of PLY. (see precedence declaration above) + # + def p_assignment_operator(self, p): + """ assignment_operator : EQUALS + | XOREQUAL + | TIMESEQUAL + | DIVEQUAL + | MODEQUAL + | PLUSEQUAL + | MINUSEQUAL + | LSHIFTEQUAL + | RSHIFTEQUAL + | ANDEQUAL + | OREQUAL + """ + p[0] = p[1] + + def p_constant_expression(self, p): + """ constant_expression : conditional_expression """ + p[0] = p[1] + + def p_conditional_expression(self, p): + """ conditional_expression : binary_expression + | binary_expression CONDOP expression COLON conditional_expression + """ + if len(p) == 2: + p[0] = p[1] + else: + p[0] = c_ast.TernaryOp(p[1], p[3], p[5], p[1].coord) + + def p_binary_expression(self, p): + """ binary_expression : cast_expression + | binary_expression TIMES binary_expression + | binary_expression DIVIDE binary_expression + | binary_expression MOD binary_expression + | binary_expression PLUS binary_expression + | binary_expression MINUS binary_expression + | binary_expression RSHIFT binary_expression + | binary_expression LSHIFT binary_expression + | binary_expression LT binary_expression + | binary_expression LE binary_expression + | binary_expression GE binary_expression + | binary_expression GT binary_expression + | binary_expression EQ binary_expression + | binary_expression NE binary_expression + | binary_expression AND binary_expression + | binary_expression OR binary_expression + | binary_expression XOR binary_expression + | binary_expression LAND binary_expression + | binary_expression LOR binary_expression + """ + if len(p) == 2: + p[0] = p[1] + else: + p[0] = c_ast.BinaryOp(p[2], p[1], p[3], p[1].coord) + + def p_cast_expression_1(self, p): + """ cast_expression : unary_expression """ + p[0] = p[1] + + def p_cast_expression_2(self, p): + """ cast_expression : LPAREN type_name RPAREN cast_expression """ + p[0] = c_ast.Cast(p[2], p[4], self._token_coord(p, 1)) + + def p_unary_expression_1(self, p): + """ unary_expression : postfix_expression """ + p[0] = p[1] + + def p_unary_expression_2(self, p): + """ unary_expression : PLUSPLUS unary_expression + | MINUSMINUS unary_expression + | unary_operator cast_expression + """ + p[0] = c_ast.UnaryOp(p[1], p[2], p[2].coord) + + def p_unary_expression_3(self, p): + """ unary_expression : SIZEOF unary_expression + | SIZEOF LPAREN type_name RPAREN + | _ALIGNOF LPAREN type_name RPAREN + """ + p[0] = c_ast.UnaryOp( + p[1], + p[2] if len(p) == 3 else p[3], + self._token_coord(p, 1)) + + def p_unary_operator(self, p): + """ unary_operator : AND + | TIMES + | PLUS + | MINUS + | NOT + | LNOT + """ + p[0] = p[1] + + def p_postfix_expression_1(self, p): + """ 
postfix_expression : primary_expression """ + p[0] = p[1] + + def p_postfix_expression_2(self, p): + """ postfix_expression : postfix_expression LBRACKET expression RBRACKET """ + p[0] = c_ast.ArrayRef(p[1], p[3], p[1].coord) + + def p_postfix_expression_3(self, p): + """ postfix_expression : postfix_expression LPAREN argument_expression_list RPAREN + | postfix_expression LPAREN RPAREN + """ + p[0] = c_ast.FuncCall(p[1], p[3] if len(p) == 5 else None, p[1].coord) + + def p_postfix_expression_4(self, p): + """ postfix_expression : postfix_expression PERIOD ID + | postfix_expression PERIOD TYPEID + | postfix_expression ARROW ID + | postfix_expression ARROW TYPEID + """ + field = c_ast.ID(p[3], self._token_coord(p, 3)) + p[0] = c_ast.StructRef(p[1], p[2], field, p[1].coord) + + def p_postfix_expression_5(self, p): + """ postfix_expression : postfix_expression PLUSPLUS + | postfix_expression MINUSMINUS + """ + p[0] = c_ast.UnaryOp('p' + p[2], p[1], p[1].coord) + + def p_postfix_expression_6(self, p): + """ postfix_expression : LPAREN type_name RPAREN brace_open initializer_list brace_close + | LPAREN type_name RPAREN brace_open initializer_list COMMA brace_close + """ + p[0] = c_ast.CompoundLiteral(p[2], p[5]) + + def p_primary_expression_1(self, p): + """ primary_expression : identifier """ + p[0] = p[1] + + def p_primary_expression_2(self, p): + """ primary_expression : constant """ + p[0] = p[1] + + def p_primary_expression_3(self, p): + """ primary_expression : unified_string_literal + | unified_wstring_literal + """ + p[0] = p[1] + + def p_primary_expression_4(self, p): + """ primary_expression : LPAREN expression RPAREN """ + p[0] = p[2] + + def p_primary_expression_5(self, p): + """ primary_expression : OFFSETOF LPAREN type_name COMMA offsetof_member_designator RPAREN + """ + coord = self._token_coord(p, 1) + p[0] = c_ast.FuncCall(c_ast.ID(p[1], coord), + c_ast.ExprList([p[3], p[5]], coord), + coord) + + def p_offsetof_member_designator(self, p): + """ offsetof_member_designator : identifier + | offsetof_member_designator PERIOD identifier + | offsetof_member_designator LBRACKET expression RBRACKET + """ + if len(p) == 2: + p[0] = p[1] + elif len(p) == 4: + p[0] = c_ast.StructRef(p[1], p[2], p[3], p[1].coord) + elif len(p) == 5: + p[0] = c_ast.ArrayRef(p[1], p[3], p[1].coord) + else: + raise NotImplementedError("Unexpected parsing state. 
len(p): %u" % len(p)) + + def p_argument_expression_list(self, p): + """ argument_expression_list : assignment_expression + | argument_expression_list COMMA assignment_expression + """ + if len(p) == 2: # single expr + p[0] = c_ast.ExprList([p[1]], p[1].coord) + else: + p[1].exprs.append(p[3]) + p[0] = p[1] + + def p_identifier(self, p): + """ identifier : ID """ + p[0] = c_ast.ID(p[1], self._token_coord(p, 1)) + + def p_constant_1(self, p): + """ constant : INT_CONST_DEC + | INT_CONST_OCT + | INT_CONST_HEX + | INT_CONST_BIN + | INT_CONST_CHAR + """ + uCount = 0 + lCount = 0 + for x in p[1][-3:]: + if x in ('l', 'L'): + lCount += 1 + elif x in ('u', 'U'): + uCount += 1 + t = '' + if uCount > 1: + raise ValueError('Constant cannot have more than one u/U suffix.') + elif lCount > 2: + raise ValueError('Constant cannot have more than two l/L suffix.') + prefix = 'unsigned ' * uCount + 'long ' * lCount + p[0] = c_ast.Constant( + prefix + 'int', p[1], self._token_coord(p, 1)) + + def p_constant_2(self, p): + """ constant : FLOAT_CONST + | HEX_FLOAT_CONST + """ + if 'x' in p[1].lower(): + t = 'float' + else: + if p[1][-1] in ('f', 'F'): + t = 'float' + elif p[1][-1] in ('l', 'L'): + t = 'long double' + else: + t = 'double' + + p[0] = c_ast.Constant( + t, p[1], self._token_coord(p, 1)) + + def p_constant_3(self, p): + """ constant : CHAR_CONST + | WCHAR_CONST + | U8CHAR_CONST + | U16CHAR_CONST + | U32CHAR_CONST + """ + p[0] = c_ast.Constant( + 'char', p[1], self._token_coord(p, 1)) + + # The "unified" string and wstring literal rules are for supporting + # concatenation of adjacent string literals. + # I.e. "hello " "world" is seen by the C compiler as a single string literal + # with the value "hello world" + # + def p_unified_string_literal(self, p): + """ unified_string_literal : STRING_LITERAL + | unified_string_literal STRING_LITERAL + """ + if len(p) == 2: # single literal + p[0] = c_ast.Constant( + 'string', p[1], self._token_coord(p, 1)) + else: + p[1].value = p[1].value[:-1] + p[2][1:] + p[0] = p[1] + + def p_unified_wstring_literal(self, p): + """ unified_wstring_literal : WSTRING_LITERAL + | U8STRING_LITERAL + | U16STRING_LITERAL + | U32STRING_LITERAL + | unified_wstring_literal WSTRING_LITERAL + | unified_wstring_literal U8STRING_LITERAL + | unified_wstring_literal U16STRING_LITERAL + | unified_wstring_literal U32STRING_LITERAL + """ + if len(p) == 2: # single literal + p[0] = c_ast.Constant( + 'string', p[1], self._token_coord(p, 1)) + else: + p[1].value = p[1].value.rstrip()[:-1] + p[2][2:] + p[0] = p[1] + + def p_brace_open(self, p): + """ brace_open : LBRACE + """ + p[0] = p[1] + p.set_lineno(0, p.lineno(1)) + + def p_brace_close(self, p): + """ brace_close : RBRACE + """ + p[0] = p[1] + p.set_lineno(0, p.lineno(1)) + + def p_empty(self, p): + 'empty : ' + p[0] = None + + def p_error(self, p): + # If error recovery is added here in the future, make sure + # _get_yacc_lookahead_token still works! + # + if p: + self._parse_error( + 'before: %s' % p.value, + self._coord(lineno=p.lineno, + column=self.clex.find_tok_column(p))) + else: + self._parse_error('At end of input', self.clex.filename) diff --git a/templates/skills/file_manager/dependencies/pycparser/lextab.py b/templates/skills/file_manager/dependencies/pycparser/lextab.py new file mode 100644 index 00000000..aeb5c152 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/lextab.py @@ -0,0 +1,10 @@ +# lextab.py. This file automatically created by PLY (version 3.10). Don't edit! 
+_tabversion = '3.10' +_lextokens = set(('AND', 'ANDEQUAL', 'ARROW', 'AUTO', 'BREAK', 'CASE', 'CHAR', 'CHAR_CONST', 'COLON', 'COMMA', 'CONDOP', 'CONST', 'CONTINUE', 'DEFAULT', 'DIVEQUAL', 'DIVIDE', 'DO', 'DOUBLE', 'ELLIPSIS', 'ELSE', 'ENUM', 'EQ', 'EQUALS', 'EXTERN', 'FLOAT', 'FLOAT_CONST', 'FOR', 'GE', 'GOTO', 'GT', 'HEX_FLOAT_CONST', 'ID', 'IF', 'INLINE', 'INT', 'INT_CONST_BIN', 'INT_CONST_CHAR', 'INT_CONST_DEC', 'INT_CONST_HEX', 'INT_CONST_OCT', 'LAND', 'LBRACE', 'LBRACKET', 'LE', 'LNOT', 'LONG', 'LOR', 'LPAREN', 'LSHIFT', 'LSHIFTEQUAL', 'LT', 'MINUS', 'MINUSEQUAL', 'MINUSMINUS', 'MOD', 'MODEQUAL', 'NE', 'NOT', 'OFFSETOF', 'OR', 'OREQUAL', 'PERIOD', 'PLUS', 'PLUSEQUAL', 'PLUSPLUS', 'PPHASH', 'PPPRAGMA', 'PPPRAGMASTR', 'RBRACE', 'RBRACKET', 'REGISTER', 'RESTRICT', 'RETURN', 'RPAREN', 'RSHIFT', 'RSHIFTEQUAL', 'SEMI', 'SHORT', 'SIGNED', 'SIZEOF', 'STATIC', 'STRING_LITERAL', 'STRUCT', 'SWITCH', 'TIMES', 'TIMESEQUAL', 'TYPEDEF', 'TYPEID', 'U16CHAR_CONST', 'U16STRING_LITERAL', 'U32CHAR_CONST', 'U32STRING_LITERAL', 'U8CHAR_CONST', 'U8STRING_LITERAL', 'UNION', 'UNSIGNED', 'VOID', 'VOLATILE', 'WCHAR_CONST', 'WHILE', 'WSTRING_LITERAL', 'XOR', 'XOREQUAL', '_ALIGNAS', '_ALIGNOF', '_ATOMIC', '_BOOL', '_COMPLEX', '_NORETURN', '_PRAGMA', '_STATIC_ASSERT', '_THREAD_LOCAL', '__INT128')) +_lexreflags = 64 +_lexliterals = '' +_lexstateinfo = {'INITIAL': 'inclusive', 'ppline': 'exclusive', 'pppragma': 'exclusive'} +_lexstatere = {'INITIAL': [('(?P[ \\t]*\\#)|(?P\\n+)|(?P\\{)|(?P\\})|(?P((((([0-9]*\\.[0-9]+)|([0-9]+\\.))([eE][-+]?[0-9]+)?)|([0-9]+([eE][-+]?[0-9]+)))[FfLl]?))|(?P(0[xX]([0-9a-fA-F]+|((([0-9a-fA-F]+)?\\.[0-9a-fA-F]+)|([0-9a-fA-F]+\\.)))([pP][+-]?[0-9]+)[FfLl]?))|(?P0[xX][0-9a-fA-F]+(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?)|(?P0[bB][01]+(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?)|(?P0[0-7]*[89])|(?P0[0-7]*(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?)|(?P(0(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?)|([1-9][0-9]*(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?))|(?P\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F])))){2,4}\')|(?P\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))\')|(?PL\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))\')|(?Pu8\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))\')|(?Pu\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))\')|(?PU\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))\')|(?P(\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))*\\n)|(\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))*$))|(?P(\'([^\'\\\\\\n]|(\\\\(([a-wyzA-Z._~!=&\\^\\-\\\\?\'"]|x(?![0-9a-fA-F]))|(\\d+)(?!\\d)|(x[0-9a-fA-F]+)(?![0-9a-fA-F]))))[^\'\n]+\')|(\'\')|(\'([\\\\][^a-zA-Z._~^!=&\\^\\-\\\\?\'"x0-9])[^\'\\n]*\'))|(?PL"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?Pu8"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?Pu"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?PU"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&
\\^\\-\\\\?\'"]))*")|(?P"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*([\\\\][^a-zA-Z._~^!=&\\^\\-\\\\?\'"x0-9])([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?P[a-zA-Z_$][0-9a-zA-Z_$]*)|(?P"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?P\\.\\.\\.)|(?P\\|\\|)|(?P\\+\\+)|(?P<<=)|(?P\\|=)|(?P\\+=)|(?P>>=)|(?P\\*=)|(?P\\^=)|(?P&=)|(?P->)|(?P\\?)|(?P/=)|(?P==)|(?P>=)|(?P&&)|(?P\\[)|(?P<=)|(?P\\()|(?P<<)|(?P-=)|(?P--)|(?P%=)|(?P!=)|(?P\\|)|(?P\\.)|(?P\\+)|(?P\\])|(?P\\))|(?P>>)|(?P\\*)|(?P\\^)|(?P&)|(?P:)|(?P,)|(?P/)|(?P=)|(?P>)|(?P!)|(?P<)|(?P-)|(?P%)|(?P~)|(?P;)', [None, ('t_PPHASH', 'PPHASH'), ('t_NEWLINE', 'NEWLINE'), ('t_LBRACE', 'LBRACE'), ('t_RBRACE', 'RBRACE'), ('t_FLOAT_CONST', 'FLOAT_CONST'), None, None, None, None, None, None, None, None, None, ('t_HEX_FLOAT_CONST', 'HEX_FLOAT_CONST'), None, None, None, None, None, None, None, ('t_INT_CONST_HEX', 'INT_CONST_HEX'), None, None, None, None, None, None, None, ('t_INT_CONST_BIN', 'INT_CONST_BIN'), None, None, None, None, None, None, None, ('t_BAD_CONST_OCT', 'BAD_CONST_OCT'), ('t_INT_CONST_OCT', 'INT_CONST_OCT'), None, None, None, None, None, None, None, ('t_INT_CONST_DEC', 'INT_CONST_DEC'), None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, ('t_INT_CONST_CHAR', 'INT_CONST_CHAR'), None, None, None, None, None, None, ('t_CHAR_CONST', 'CHAR_CONST'), None, None, None, None, None, None, ('t_WCHAR_CONST', 'WCHAR_CONST'), None, None, None, None, None, None, ('t_U8CHAR_CONST', 'U8CHAR_CONST'), None, None, None, None, None, None, ('t_U16CHAR_CONST', 'U16CHAR_CONST'), None, None, None, None, None, None, ('t_U32CHAR_CONST', 'U32CHAR_CONST'), None, None, None, None, None, None, ('t_UNMATCHED_QUOTE', 'UNMATCHED_QUOTE'), None, None, None, None, None, None, None, None, None, None, None, None, None, None, ('t_BAD_CHAR_CONST', 'BAD_CHAR_CONST'), None, None, None, None, None, None, None, None, None, None, ('t_WSTRING_LITERAL', 'WSTRING_LITERAL'), None, None, ('t_U8STRING_LITERAL', 'U8STRING_LITERAL'), None, None, ('t_U16STRING_LITERAL', 'U16STRING_LITERAL'), None, None, ('t_U32STRING_LITERAL', 'U32STRING_LITERAL'), None, None, ('t_BAD_STRING_LITERAL', 'BAD_STRING_LITERAL'), None, None, None, None, None, ('t_ID', 'ID'), (None, 'STRING_LITERAL'), None, None, (None, 'ELLIPSIS'), (None, 'LOR'), (None, 'PLUSPLUS'), (None, 'LSHIFTEQUAL'), (None, 'OREQUAL'), (None, 'PLUSEQUAL'), (None, 'RSHIFTEQUAL'), (None, 'TIMESEQUAL'), (None, 'XOREQUAL'), (None, 'ANDEQUAL'), (None, 'ARROW'), (None, 'CONDOP'), (None, 'DIVEQUAL'), (None, 'EQ'), (None, 'GE'), (None, 'LAND'), (None, 'LBRACKET'), (None, 'LE'), (None, 'LPAREN'), (None, 'LSHIFT'), (None, 'MINUSEQUAL'), (None, 'MINUSMINUS'), (None, 'MODEQUAL'), (None, 'NE'), (None, 'OR'), (None, 'PERIOD'), (None, 'PLUS'), (None, 'RBRACKET'), (None, 'RPAREN'), (None, 'RSHIFT'), (None, 'TIMES'), (None, 'XOR'), (None, 'AND'), (None, 'COLON'), (None, 'COMMA'), (None, 'DIVIDE'), (None, 'EQUALS'), (None, 'GT'), (None, 'LNOT'), (None, 'LT'), (None, 'MINUS'), (None, 'MOD'), (None, 'NOT'), (None, 'SEMI')])], 'ppline': [('(?P"([^"\\\\\\n]|(\\\\[0-9a-zA-Z._~!=&\\^\\-\\\\?\'"]))*")|(?P(0(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?)|([1-9][0-9]*(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?))|(?P\\n)|(?Pline)', [None, ('t_ppline_FILENAME', 'FILENAME'), None, None, ('t_ppline_LINE_NUMBER', 'LINE_NUMBER'), None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, ('t_ppline_NEWLINE', 
'NEWLINE'), ('t_ppline_PPLINE', 'PPLINE')])], 'pppragma': [('(?P\\n)|(?Ppragma)|(?P.+)', [None, ('t_pppragma_NEWLINE', 'NEWLINE'), ('t_pppragma_PPPRAGMA', 'PPPRAGMA'), ('t_pppragma_STR', 'STR')])]} +_lexstateignore = {'INITIAL': ' \t', 'ppline': ' \t', 'pppragma': ' \t'} +_lexstateerrorf = {'INITIAL': 't_error', 'ppline': 't_ppline_error', 'pppragma': 't_pppragma_error'} +_lexstateeoff = {} diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/__init__.py b/templates/skills/file_manager/dependencies/pycparser/ply/__init__.py new file mode 100644 index 00000000..6e53cddc --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ply/__init__.py @@ -0,0 +1,5 @@ +# PLY package +# Author: David Beazley (dave@dabeaz.com) + +__version__ = '3.9' +__all__ = ['lex','yacc'] diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/cpp.py b/templates/skills/file_manager/dependencies/pycparser/ply/cpp.py new file mode 100644 index 00000000..86273eac --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ply/cpp.py @@ -0,0 +1,905 @@ +# ----------------------------------------------------------------------------- +# cpp.py +# +# Author: David Beazley (http://www.dabeaz.com) +# Copyright (C) 2017 +# All rights reserved +# +# This module implements an ANSI-C style lexical preprocessor for PLY. +# ----------------------------------------------------------------------------- +import sys + +# Some Python 3 compatibility shims +if sys.version_info.major < 3: + STRING_TYPES = (str, unicode) +else: + STRING_TYPES = str + xrange = range + +# ----------------------------------------------------------------------------- +# Default preprocessor lexer definitions. These tokens are enough to get +# a basic preprocessor working. Other modules may import these if they want +# ----------------------------------------------------------------------------- + +tokens = ( + 'CPP_ID','CPP_INTEGER', 'CPP_FLOAT', 'CPP_STRING', 'CPP_CHAR', 'CPP_WS', 'CPP_COMMENT1', 'CPP_COMMENT2', 'CPP_POUND','CPP_DPOUND' +) + +literals = "+-*/%|&~^<>=!?()[]{}.,;:\\\'\"" + +# Whitespace +def t_CPP_WS(t): + r'\s+' + t.lexer.lineno += t.value.count("\n") + return t + +t_CPP_POUND = r'\#' +t_CPP_DPOUND = r'\#\#' + +# Identifier +t_CPP_ID = r'[A-Za-z_][\w_]*' + +# Integer literal +def CPP_INTEGER(t): + r'(((((0x)|(0X))[0-9a-fA-F]+)|(\d+))([uU][lL]|[lL][uU]|[uU]|[lL])?)' + return t + +t_CPP_INTEGER = CPP_INTEGER + +# Floating literal +t_CPP_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?' + +# String literal +def t_CPP_STRING(t): + r'\"([^\\\n]|(\\(.|\n)))*?\"' + t.lexer.lineno += t.value.count("\n") + return t + +# Character constant 'c' or L'c' +def t_CPP_CHAR(t): + r'(L)?\'([^\\\n]|(\\(.|\n)))*?\'' + t.lexer.lineno += t.value.count("\n") + return t + +# Comment +def t_CPP_COMMENT1(t): + r'(/\*(.|\n)*?\*/)' + ncr = t.value.count("\n") + t.lexer.lineno += ncr + # replace with one space or a number of '\n' + t.type = 'CPP_WS'; t.value = '\n' * ncr if ncr else ' ' + return t + +# Line comment +def t_CPP_COMMENT2(t): + r'(//.*?(\n|$))' + # replace with '/n' + t.type = 'CPP_WS'; t.value = '\n' + return t + +def t_error(t): + t.type = t.value[0] + t.value = t.value[0] + t.lexer.skip(1) + return t + +import re +import copy +import time +import os.path + +# ----------------------------------------------------------------------------- +# trigraph() +# +# Given an input string, this function replaces all trigraph sequences. 
+# The following mapping is used: +# +# ??= # +# ??/ \ +# ??' ^ +# ??( [ +# ??) ] +# ??! | +# ??< { +# ??> } +# ??- ~ +# ----------------------------------------------------------------------------- + +_trigraph_pat = re.compile(r'''\?\?[=/\'\(\)\!<>\-]''') +_trigraph_rep = { + '=':'#', + '/':'\\', + "'":'^', + '(':'[', + ')':']', + '!':'|', + '<':'{', + '>':'}', + '-':'~' +} + +def trigraph(input): + return _trigraph_pat.sub(lambda g: _trigraph_rep[g.group()[-1]],input) + +# ------------------------------------------------------------------ +# Macro object +# +# This object holds information about preprocessor macros +# +# .name - Macro name (string) +# .value - Macro value (a list of tokens) +# .arglist - List of argument names +# .variadic - Boolean indicating whether or not variadic macro +# .vararg - Name of the variadic parameter +# +# When a macro is created, the macro replacement token sequence is +# pre-scanned and used to create patch lists that are later used +# during macro expansion +# ------------------------------------------------------------------ + +class Macro(object): + def __init__(self,name,value,arglist=None,variadic=False): + self.name = name + self.value = value + self.arglist = arglist + self.variadic = variadic + if variadic: + self.vararg = arglist[-1] + self.source = None + +# ------------------------------------------------------------------ +# Preprocessor object +# +# Object representing a preprocessor. Contains macro definitions, +# include directories, and other information +# ------------------------------------------------------------------ + +class Preprocessor(object): + def __init__(self,lexer=None): + if lexer is None: + lexer = lex.lexer + self.lexer = lexer + self.macros = { } + self.path = [] + self.temp_path = [] + + # Probe the lexer for selected tokens + self.lexprobe() + + tm = time.localtime() + self.define("__DATE__ \"%s\"" % time.strftime("%b %d %Y",tm)) + self.define("__TIME__ \"%s\"" % time.strftime("%H:%M:%S",tm)) + self.parser = None + + # ----------------------------------------------------------------------------- + # tokenize() + # + # Utility function. Given a string of text, tokenize into a list of tokens + # ----------------------------------------------------------------------------- + + def tokenize(self,text): + tokens = [] + self.lexer.input(text) + while True: + tok = self.lexer.token() + if not tok: break + tokens.append(tok) + return tokens + + # --------------------------------------------------------------------- + # error() + # + # Report a preprocessor error/warning of some kind + # ---------------------------------------------------------------------- + + def error(self,file,line,msg): + print("%s:%d %s" % (file,line,msg)) + + # ---------------------------------------------------------------------- + # lexprobe() + # + # This method probes the preprocessor lexer object to discover + # the token types of symbols that are important to the preprocessor. + # If this works right, the preprocessor will simply "work" + # with any suitable lexer regardless of how tokens have been named. 
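+    #
+    # Illustration (a sketch, not upstream PLY code): each probe lexes a
+    # known input and records whatever token type the lexer reports for
+    # it, e.g.
+    #
+    #     self.lexer.input("identifier")
+    #     tok = self.lexer.token()
+    #     self.t_ID = tok.type    # whatever this lexer calls identifiers
+    #
+    # Later stages then match tokens by the discovered type instead of a
+    # hard-coded name.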
+ # ---------------------------------------------------------------------- + + def lexprobe(self): + + # Determine the token type for identifiers + self.lexer.input("identifier") + tok = self.lexer.token() + if not tok or tok.value != "identifier": + print("Couldn't determine identifier type") + else: + self.t_ID = tok.type + + # Determine the token type for integers + self.lexer.input("12345") + tok = self.lexer.token() + if not tok or int(tok.value) != 12345: + print("Couldn't determine integer type") + else: + self.t_INTEGER = tok.type + self.t_INTEGER_TYPE = type(tok.value) + + # Determine the token type for strings enclosed in double quotes + self.lexer.input("\"filename\"") + tok = self.lexer.token() + if not tok or tok.value != "\"filename\"": + print("Couldn't determine string type") + else: + self.t_STRING = tok.type + + # Determine the token type for whitespace--if any + self.lexer.input(" ") + tok = self.lexer.token() + if not tok or tok.value != " ": + self.t_SPACE = None + else: + self.t_SPACE = tok.type + + # Determine the token type for newlines + self.lexer.input("\n") + tok = self.lexer.token() + if not tok or tok.value != "\n": + self.t_NEWLINE = None + print("Couldn't determine token for newlines") + else: + self.t_NEWLINE = tok.type + + self.t_WS = (self.t_SPACE, self.t_NEWLINE) + + # Check for other characters used by the preprocessor + chars = [ '<','>','#','##','\\','(',')',',','.'] + for c in chars: + self.lexer.input(c) + tok = self.lexer.token() + if not tok or tok.value != c: + print("Unable to lex '%s' required for preprocessor" % c) + + # ---------------------------------------------------------------------- + # add_path() + # + # Adds a search path to the preprocessor. + # ---------------------------------------------------------------------- + + def add_path(self,path): + self.path.append(path) + + # ---------------------------------------------------------------------- + # group_lines() + # + # Given an input string, this function splits it into lines. Trailing whitespace + # is removed. Any line ending with \ is grouped with the next line. This + # function forms the lowest level of the preprocessor---grouping into text into + # a line-by-line format. + # ---------------------------------------------------------------------- + + def group_lines(self,input): + lex = self.lexer.clone() + lines = [x.rstrip() for x in input.splitlines()] + for i in xrange(len(lines)): + j = i+1 + while lines[i].endswith('\\') and (j < len(lines)): + lines[i] = lines[i][:-1]+lines[j] + lines[j] = "" + j += 1 + + input = "\n".join(lines) + lex.input(input) + lex.lineno = 1 + + current_line = [] + while True: + tok = lex.token() + if not tok: + break + current_line.append(tok) + if tok.type in self.t_WS and '\n' in tok.value: + yield current_line + current_line = [] + + if current_line: + yield current_line + + # ---------------------------------------------------------------------- + # tokenstrip() + # + # Remove leading/trailing whitespace tokens from a token list + # ---------------------------------------------------------------------- + + def tokenstrip(self,tokens): + i = 0 + while i < len(tokens) and tokens[i].type in self.t_WS: + i += 1 + del tokens[:i] + i = len(tokens)-1 + while i >= 0 and tokens[i].type in self.t_WS: + i -= 1 + del tokens[i+1:] + return tokens + + + # ---------------------------------------------------------------------- + # collect_args() + # + # Collects comma separated arguments from a list of tokens. 
The arguments + # must be enclosed in parenthesis. Returns a tuple (tokencount,args,positions) + # where tokencount is the number of tokens consumed, args is a list of arguments, + # and positions is a list of integers containing the starting index of each + # argument. Each argument is represented by a list of tokens. + # + # When collecting arguments, leading and trailing whitespace is removed + # from each argument. + # + # This function properly handles nested parenthesis and commas---these do not + # define new arguments. + # ---------------------------------------------------------------------- + + def collect_args(self,tokenlist): + args = [] + positions = [] + current_arg = [] + nesting = 1 + tokenlen = len(tokenlist) + + # Search for the opening '('. + i = 0 + while (i < tokenlen) and (tokenlist[i].type in self.t_WS): + i += 1 + + if (i < tokenlen) and (tokenlist[i].value == '('): + positions.append(i+1) + else: + self.error(self.source,tokenlist[0].lineno,"Missing '(' in macro arguments") + return 0, [], [] + + i += 1 + + while i < tokenlen: + t = tokenlist[i] + if t.value == '(': + current_arg.append(t) + nesting += 1 + elif t.value == ')': + nesting -= 1 + if nesting == 0: + if current_arg: + args.append(self.tokenstrip(current_arg)) + positions.append(i) + return i+1,args,positions + current_arg.append(t) + elif t.value == ',' and nesting == 1: + args.append(self.tokenstrip(current_arg)) + positions.append(i+1) + current_arg = [] + else: + current_arg.append(t) + i += 1 + + # Missing end argument + self.error(self.source,tokenlist[-1].lineno,"Missing ')' in macro arguments") + return 0, [],[] + + # ---------------------------------------------------------------------- + # macro_prescan() + # + # Examine the macro value (token sequence) and identify patch points + # This is used to speed up macro expansion later on---we'll know + # right away where to apply patches to the value to form the expansion + # ---------------------------------------------------------------------- + + def macro_prescan(self,macro): + macro.patch = [] # Standard macro arguments + macro.str_patch = [] # String conversion expansion + macro.var_comma_patch = [] # Variadic macro comma patch + i = 0 + while i < len(macro.value): + if macro.value[i].type == self.t_ID and macro.value[i].value in macro.arglist: + argnum = macro.arglist.index(macro.value[i].value) + # Conversion of argument to a string + if i > 0 and macro.value[i-1].value == '#': + macro.value[i] = copy.copy(macro.value[i]) + macro.value[i].type = self.t_STRING + del macro.value[i-1] + macro.str_patch.append((argnum,i-1)) + continue + # Concatenation + elif (i > 0 and macro.value[i-1].value == '##'): + macro.patch.append(('c',argnum,i-1)) + del macro.value[i-1] + continue + elif ((i+1) < len(macro.value) and macro.value[i+1].value == '##'): + macro.patch.append(('c',argnum,i)) + i += 1 + continue + # Standard expansion + else: + macro.patch.append(('e',argnum,i)) + elif macro.value[i].value == '##': + if macro.variadic and (i > 0) and (macro.value[i-1].value == ',') and \ + ((i+1) < len(macro.value)) and (macro.value[i+1].type == self.t_ID) and \ + (macro.value[i+1].value == macro.vararg): + macro.var_comma_patch.append(i-1) + i += 1 + macro.patch.sort(key=lambda x: x[2],reverse=True) + + # ---------------------------------------------------------------------- + # macro_expand_args() + # + # Given a Macro and list of arguments (each a token list), this method + # returns an expanded version of a macro. 
The return value is a token sequence + # representing the replacement macro tokens + # ---------------------------------------------------------------------- + + def macro_expand_args(self,macro,args): + # Make a copy of the macro token sequence + rep = [copy.copy(_x) for _x in macro.value] + + # Make string expansion patches. These do not alter the length of the replacement sequence + + str_expansion = {} + for argnum, i in macro.str_patch: + if argnum not in str_expansion: + str_expansion[argnum] = ('"%s"' % "".join([x.value for x in args[argnum]])).replace("\\","\\\\") + rep[i] = copy.copy(rep[i]) + rep[i].value = str_expansion[argnum] + + # Make the variadic macro comma patch. If the variadic macro argument is empty, we get rid + comma_patch = False + if macro.variadic and not args[-1]: + for i in macro.var_comma_patch: + rep[i] = None + comma_patch = True + + # Make all other patches. The order of these matters. It is assumed that the patch list + # has been sorted in reverse order of patch location since replacements will cause the + # size of the replacement sequence to expand from the patch point. + + expanded = { } + for ptype, argnum, i in macro.patch: + # Concatenation. Argument is left unexpanded + if ptype == 'c': + rep[i:i+1] = args[argnum] + # Normal expansion. Argument is macro expanded first + elif ptype == 'e': + if argnum not in expanded: + expanded[argnum] = self.expand_macros(args[argnum]) + rep[i:i+1] = expanded[argnum] + + # Get rid of removed comma if necessary + if comma_patch: + rep = [_i for _i in rep if _i] + + return rep + + + # ---------------------------------------------------------------------- + # expand_macros() + # + # Given a list of tokens, this function performs macro expansion. + # The expanded argument is a dictionary that contains macros already + # expanded. This is used to prevent infinite recursion. 
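+    #
+    # Illustration (a sketch, not upstream PLY code): for mutually
+    # recursive definitions such as
+    #
+    #     #define A B
+    #     #define B A
+    #
+    # 'A' is recorded in 'expanded' before its replacement is processed,
+    # so the 'A' produced by expanding 'B' is left as a plain identifier
+    # instead of being expanded again.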
+ # ---------------------------------------------------------------------- + + def expand_macros(self,tokens,expanded=None): + if expanded is None: + expanded = {} + i = 0 + while i < len(tokens): + t = tokens[i] + if t.type == self.t_ID: + if t.value in self.macros and t.value not in expanded: + # Yes, we found a macro match + expanded[t.value] = True + + m = self.macros[t.value] + if not m.arglist: + # A simple macro + ex = self.expand_macros([copy.copy(_x) for _x in m.value],expanded) + for e in ex: + e.lineno = t.lineno + tokens[i:i+1] = ex + i += len(ex) + else: + # A macro with arguments + j = i + 1 + while j < len(tokens) and tokens[j].type in self.t_WS: + j += 1 + if tokens[j].value == '(': + tokcount,args,positions = self.collect_args(tokens[j:]) + if not m.variadic and len(args) != len(m.arglist): + self.error(self.source,t.lineno,"Macro %s requires %d arguments" % (t.value,len(m.arglist))) + i = j + tokcount + elif m.variadic and len(args) < len(m.arglist)-1: + if len(m.arglist) > 2: + self.error(self.source,t.lineno,"Macro %s must have at least %d arguments" % (t.value, len(m.arglist)-1)) + else: + self.error(self.source,t.lineno,"Macro %s must have at least %d argument" % (t.value, len(m.arglist)-1)) + i = j + tokcount + else: + if m.variadic: + if len(args) == len(m.arglist)-1: + args.append([]) + else: + args[len(m.arglist)-1] = tokens[j+positions[len(m.arglist)-1]:j+tokcount-1] + del args[len(m.arglist):] + + # Get macro replacement text + rep = self.macro_expand_args(m,args) + rep = self.expand_macros(rep,expanded) + for r in rep: + r.lineno = t.lineno + tokens[i:j+tokcount] = rep + i += len(rep) + del expanded[t.value] + continue + elif t.value == '__LINE__': + t.type = self.t_INTEGER + t.value = self.t_INTEGER_TYPE(t.lineno) + + i += 1 + return tokens + + # ---------------------------------------------------------------------- + # evalexpr() + # + # Evaluate an expression token sequence for the purposes of evaluating + # integral expressions. 
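+    #
+    # Illustration (a sketch, not upstream PLY code): a directive like
+    #
+    #     #if defined(FOO) && BAR > 2
+    #
+    # is rewritten token by token -- defined(FOO) becomes an integer 1 or
+    # 0, identifiers that survive macro expansion default to 0, and '&&',
+    # '||' and '!' become 'and', 'or' and 'not' -- and the resulting
+    # string is evaluated with Python's eval().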
+ # ---------------------------------------------------------------------- + + def evalexpr(self,tokens): + # tokens = tokenize(line) + # Search for defined macros + i = 0 + while i < len(tokens): + if tokens[i].type == self.t_ID and tokens[i].value == 'defined': + j = i + 1 + needparen = False + result = "0L" + while j < len(tokens): + if tokens[j].type in self.t_WS: + j += 1 + continue + elif tokens[j].type == self.t_ID: + if tokens[j].value in self.macros: + result = "1L" + else: + result = "0L" + if not needparen: break + elif tokens[j].value == '(': + needparen = True + elif tokens[j].value == ')': + break + else: + self.error(self.source,tokens[i].lineno,"Malformed defined()") + j += 1 + tokens[i].type = self.t_INTEGER + tokens[i].value = self.t_INTEGER_TYPE(result) + del tokens[i+1:j+1] + i += 1 + tokens = self.expand_macros(tokens) + for i,t in enumerate(tokens): + if t.type == self.t_ID: + tokens[i] = copy.copy(t) + tokens[i].type = self.t_INTEGER + tokens[i].value = self.t_INTEGER_TYPE("0L") + elif t.type == self.t_INTEGER: + tokens[i] = copy.copy(t) + # Strip off any trailing suffixes + tokens[i].value = str(tokens[i].value) + while tokens[i].value[-1] not in "0123456789abcdefABCDEF": + tokens[i].value = tokens[i].value[:-1] + + expr = "".join([str(x.value) for x in tokens]) + expr = expr.replace("&&"," and ") + expr = expr.replace("||"," or ") + expr = expr.replace("!"," not ") + try: + result = eval(expr) + except Exception: + self.error(self.source,tokens[0].lineno,"Couldn't evaluate expression") + result = 0 + return result + + # ---------------------------------------------------------------------- + # parsegen() + # + # Parse an input string/ + # ---------------------------------------------------------------------- + def parsegen(self,input,source=None): + + # Replace trigraph sequences + t = trigraph(input) + lines = self.group_lines(t) + + if not source: + source = "" + + self.define("__FILE__ \"%s\"" % source) + + self.source = source + chunk = [] + enable = True + iftrigger = False + ifstack = [] + + for x in lines: + for i,tok in enumerate(x): + if tok.type not in self.t_WS: break + if tok.value == '#': + # Preprocessor directive + + # insert necessary whitespace instead of eaten tokens + for tok in x: + if tok.type in self.t_WS and '\n' in tok.value: + chunk.append(tok) + + dirtokens = self.tokenstrip(x[i+1:]) + if dirtokens: + name = dirtokens[0].value + args = self.tokenstrip(dirtokens[1:]) + else: + name = "" + args = [] + + if name == 'define': + if enable: + for tok in self.expand_macros(chunk): + yield tok + chunk = [] + self.define(args) + elif name == 'include': + if enable: + for tok in self.expand_macros(chunk): + yield tok + chunk = [] + oldfile = self.macros['__FILE__'] + for tok in self.include(args): + yield tok + self.macros['__FILE__'] = oldfile + self.source = source + elif name == 'undef': + if enable: + for tok in self.expand_macros(chunk): + yield tok + chunk = [] + self.undef(args) + elif name == 'ifdef': + ifstack.append((enable,iftrigger)) + if enable: + if not args[0].value in self.macros: + enable = False + iftrigger = False + else: + iftrigger = True + elif name == 'ifndef': + ifstack.append((enable,iftrigger)) + if enable: + if args[0].value in self.macros: + enable = False + iftrigger = False + else: + iftrigger = True + elif name == 'if': + ifstack.append((enable,iftrigger)) + if enable: + result = self.evalexpr(args) + if not result: + enable = False + iftrigger = False + else: + iftrigger = True + elif name == 'elif': + if ifstack: + 
if ifstack[-1][0]: # We only pay attention if outer "if" allows this + if enable: # If already true, we flip enable False + enable = False + elif not iftrigger: # If False, but not triggered yet, we'll check expression + result = self.evalexpr(args) + if result: + enable = True + iftrigger = True + else: + self.error(self.source,dirtokens[0].lineno,"Misplaced #elif") + + elif name == 'else': + if ifstack: + if ifstack[-1][0]: + if enable: + enable = False + elif not iftrigger: + enable = True + iftrigger = True + else: + self.error(self.source,dirtokens[0].lineno,"Misplaced #else") + + elif name == 'endif': + if ifstack: + enable,iftrigger = ifstack.pop() + else: + self.error(self.source,dirtokens[0].lineno,"Misplaced #endif") + else: + # Unknown preprocessor directive + pass + + else: + # Normal text + if enable: + chunk.extend(x) + + for tok in self.expand_macros(chunk): + yield tok + chunk = [] + + # ---------------------------------------------------------------------- + # include() + # + # Implementation of file-inclusion + # ---------------------------------------------------------------------- + + def include(self,tokens): + # Try to extract the filename and then process an include file + if not tokens: + return + if tokens: + if tokens[0].value != '<' and tokens[0].type != self.t_STRING: + tokens = self.expand_macros(tokens) + + if tokens[0].value == '<': + # Include <...> + i = 1 + while i < len(tokens): + if tokens[i].value == '>': + break + i += 1 + else: + print("Malformed #include <...>") + return + filename = "".join([x.value for x in tokens[1:i]]) + path = self.path + [""] + self.temp_path + elif tokens[0].type == self.t_STRING: + filename = tokens[0].value[1:-1] + path = self.temp_path + [""] + self.path + else: + print("Malformed #include statement") + return + for p in path: + iname = os.path.join(p,filename) + try: + data = open(iname,"r").read() + dname = os.path.dirname(iname) + if dname: + self.temp_path.insert(0,dname) + for tok in self.parsegen(data,filename): + yield tok + if dname: + del self.temp_path[0] + break + except IOError: + pass + else: + print("Couldn't find '%s'" % filename) + + # ---------------------------------------------------------------------- + # define() + # + # Define a new macro + # ---------------------------------------------------------------------- + + def define(self,tokens): + if isinstance(tokens,STRING_TYPES): + tokens = self.tokenize(tokens) + + linetok = tokens + try: + name = linetok[0] + if len(linetok) > 1: + mtype = linetok[1] + else: + mtype = None + if not mtype: + m = Macro(name.value,[]) + self.macros[name.value] = m + elif mtype.type in self.t_WS: + # A normal macro + m = Macro(name.value,self.tokenstrip(linetok[2:])) + self.macros[name.value] = m + elif mtype.value == '(': + # A macro with arguments + tokcount, args, positions = self.collect_args(linetok[1:]) + variadic = False + for a in args: + if variadic: + print("No more arguments may follow a variadic argument") + break + astr = "".join([str(_i.value) for _i in a]) + if astr == "...": + variadic = True + a[0].type = self.t_ID + a[0].value = '__VA_ARGS__' + variadic = True + del a[1:] + continue + elif astr[-3:] == "..." and a[0].type == self.t_ID: + variadic = True + del a[1:] + # If, for some reason, "." 
is part of the identifier, strip off the name for the purposes + # of macro expansion + if a[0].value[-3:] == '...': + a[0].value = a[0].value[:-3] + continue + if len(a) > 1 or a[0].type != self.t_ID: + print("Invalid macro argument") + break + else: + mvalue = self.tokenstrip(linetok[1+tokcount:]) + i = 0 + while i < len(mvalue): + if i+1 < len(mvalue): + if mvalue[i].type in self.t_WS and mvalue[i+1].value == '##': + del mvalue[i] + continue + elif mvalue[i].value == '##' and mvalue[i+1].type in self.t_WS: + del mvalue[i+1] + i += 1 + m = Macro(name.value,mvalue,[x[0].value for x in args],variadic) + self.macro_prescan(m) + self.macros[name.value] = m + else: + print("Bad macro definition") + except LookupError: + print("Bad macro definition") + + # ---------------------------------------------------------------------- + # undef() + # + # Undefine a macro + # ---------------------------------------------------------------------- + + def undef(self,tokens): + id = tokens[0].value + try: + del self.macros[id] + except LookupError: + pass + + # ---------------------------------------------------------------------- + # parse() + # + # Parse input text. + # ---------------------------------------------------------------------- + def parse(self,input,source=None,ignore={}): + self.ignore = ignore + self.parser = self.parsegen(input,source) + + # ---------------------------------------------------------------------- + # token() + # + # Method to return individual tokens + # ---------------------------------------------------------------------- + def token(self): + try: + while True: + tok = next(self.parser) + if tok.type not in self.ignore: return tok + except StopIteration: + self.parser = None + return None + +if __name__ == '__main__': + import ply.lex as lex + lexer = lex.lex() + + # Run a preprocessor + import sys + f = open(sys.argv[1]) + input = f.read() + + p = Preprocessor(lexer) + p.parse(input,sys.argv[1]) + while True: + tok = p.token() + if not tok: break + print(p.source, tok) diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/ctokens.py b/templates/skills/file_manager/dependencies/pycparser/ply/ctokens.py new file mode 100644 index 00000000..f6f6952d --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ply/ctokens.py @@ -0,0 +1,133 @@ +# ---------------------------------------------------------------------- +# ctokens.py +# +# Token specifications for symbols in ANSI C and C++. This file is +# meant to be used as a library in other tokenizers. +# ---------------------------------------------------------------------- + +# Reserved words + +tokens = [ + # Literals (identifier, integer constant, float constant, string constant, char const) + 'ID', 'TYPEID', 'INTEGER', 'FLOAT', 'STRING', 'CHARACTER', + + # Operators (+,-,*,/,%,|,&,~,^,<<,>>, ||, &&, !, <, <=, >, >=, ==, !=) + 'PLUS', 'MINUS', 'TIMES', 'DIVIDE', 'MODULO', + 'OR', 'AND', 'NOT', 'XOR', 'LSHIFT', 'RSHIFT', + 'LOR', 'LAND', 'LNOT', + 'LT', 'LE', 'GT', 'GE', 'EQ', 'NE', + + # Assignment (=, *=, /=, %=, +=, -=, <<=, >>=, &=, ^=, |=) + 'EQUALS', 'TIMESEQUAL', 'DIVEQUAL', 'MODEQUAL', 'PLUSEQUAL', 'MINUSEQUAL', + 'LSHIFTEQUAL','RSHIFTEQUAL', 'ANDEQUAL', 'XOREQUAL', 'OREQUAL', + + # Increment/decrement (++,--) + 'INCREMENT', 'DECREMENT', + + # Structure dereference (->) + 'ARROW', + + # Ternary operator (?) + 'TERNARY', + + # Delimeters ( ) [ ] { } , . ; : + 'LPAREN', 'RPAREN', + 'LBRACKET', 'RBRACKET', + 'LBRACE', 'RBRACE', + 'COMMA', 'PERIOD', 'SEMI', 'COLON', + + # Ellipsis (...) 
+ 'ELLIPSIS', +] + +# Operators +t_PLUS = r'\+' +t_MINUS = r'-' +t_TIMES = r'\*' +t_DIVIDE = r'/' +t_MODULO = r'%' +t_OR = r'\|' +t_AND = r'&' +t_NOT = r'~' +t_XOR = r'\^' +t_LSHIFT = r'<<' +t_RSHIFT = r'>>' +t_LOR = r'\|\|' +t_LAND = r'&&' +t_LNOT = r'!' +t_LT = r'<' +t_GT = r'>' +t_LE = r'<=' +t_GE = r'>=' +t_EQ = r'==' +t_NE = r'!=' + +# Assignment operators + +t_EQUALS = r'=' +t_TIMESEQUAL = r'\*=' +t_DIVEQUAL = r'/=' +t_MODEQUAL = r'%=' +t_PLUSEQUAL = r'\+=' +t_MINUSEQUAL = r'-=' +t_LSHIFTEQUAL = r'<<=' +t_RSHIFTEQUAL = r'>>=' +t_ANDEQUAL = r'&=' +t_OREQUAL = r'\|=' +t_XOREQUAL = r'\^=' + +# Increment/decrement +t_INCREMENT = r'\+\+' +t_DECREMENT = r'--' + +# -> +t_ARROW = r'->' + +# ? +t_TERNARY = r'\?' + +# Delimeters +t_LPAREN = r'\(' +t_RPAREN = r'\)' +t_LBRACKET = r'\[' +t_RBRACKET = r'\]' +t_LBRACE = r'\{' +t_RBRACE = r'\}' +t_COMMA = r',' +t_PERIOD = r'\.' +t_SEMI = r';' +t_COLON = r':' +t_ELLIPSIS = r'\.\.\.' + +# Identifiers +t_ID = r'[A-Za-z_][A-Za-z0-9_]*' + +# Integer literal +t_INTEGER = r'\d+([uU]|[lL]|[uU][lL]|[lL][uU])?' + +# Floating literal +t_FLOAT = r'((\d+)(\.\d+)(e(\+|-)?(\d+))? | (\d+)e(\+|-)?(\d+))([lL]|[fF])?' + +# String literal +t_STRING = r'\"([^\\\n]|(\\.))*?\"' + +# Character constant 'c' or L'c' +t_CHARACTER = r'(L)?\'([^\\\n]|(\\.))*?\'' + +# Comment (C-Style) +def t_COMMENT(t): + r'/\*(.|\n)*?\*/' + t.lexer.lineno += t.value.count('\n') + return t + +# Comment (C++-Style) +def t_CPPCOMMENT(t): + r'//.*\n' + t.lexer.lineno += 1 + return t + + + + + + diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/lex.py b/templates/skills/file_manager/dependencies/pycparser/ply/lex.py new file mode 100644 index 00000000..dfc51394 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ply/lex.py @@ -0,0 +1,1099 @@ +# ----------------------------------------------------------------------------- +# ply: lex.py +# +# Copyright (C) 2001-2017 +# David M. Beazley (Dabeaz LLC) +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# * Neither the name of the David Beazley or Dabeaz LLC may be used to +# endorse or promote products derived from this software without +# specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+# ----------------------------------------------------------------------------- + +__version__ = '3.10' +__tabversion__ = '3.10' + +import re +import sys +import types +import copy +import os +import inspect + +# This tuple contains known string types +try: + # Python 2.6 + StringTypes = (types.StringType, types.UnicodeType) +except AttributeError: + # Python 3.0 + StringTypes = (str, bytes) + +# This regular expression is used to match valid token names +_is_identifier = re.compile(r'^[a-zA-Z0-9_]+$') + +# Exception thrown when invalid token encountered and no default error +# handler is defined. +class LexError(Exception): + def __init__(self, message, s): + self.args = (message,) + self.text = s + + +# Token class. This class is used to represent the tokens produced. +class LexToken(object): + def __str__(self): + return 'LexToken(%s,%r,%d,%d)' % (self.type, self.value, self.lineno, self.lexpos) + + def __repr__(self): + return str(self) + + +# This object is a stand-in for a logging object created by the +# logging module. + +class PlyLogger(object): + def __init__(self, f): + self.f = f + + def critical(self, msg, *args, **kwargs): + self.f.write((msg % args) + '\n') + + def warning(self, msg, *args, **kwargs): + self.f.write('WARNING: ' + (msg % args) + '\n') + + def error(self, msg, *args, **kwargs): + self.f.write('ERROR: ' + (msg % args) + '\n') + + info = critical + debug = critical + + +# Null logger is used when no output is generated. Does nothing. +class NullLogger(object): + def __getattribute__(self, name): + return self + + def __call__(self, *args, **kwargs): + return self + + +# ----------------------------------------------------------------------------- +# === Lexing Engine === +# +# The following Lexer class implements the lexer runtime. There are only +# a few public methods and attributes: +# +# input() - Store a new string in the lexer +# token() - Get the next token +# clone() - Clone the lexer +# +# lineno - Current line number +# lexpos - Current position in the input string +# ----------------------------------------------------------------------------- + +class Lexer: + def __init__(self): + self.lexre = None # Master regular expression. 
This is a list of + # tuples (re, findex) where re is a compiled + # regular expression and findex is a list + # mapping regex group numbers to rules + self.lexretext = None # Current regular expression strings + self.lexstatere = {} # Dictionary mapping lexer states to master regexs + self.lexstateretext = {} # Dictionary mapping lexer states to regex strings + self.lexstaterenames = {} # Dictionary mapping lexer states to symbol names + self.lexstate = 'INITIAL' # Current lexer state + self.lexstatestack = [] # Stack of lexer states + self.lexstateinfo = None # State information + self.lexstateignore = {} # Dictionary of ignored characters for each state + self.lexstateerrorf = {} # Dictionary of error functions for each state + self.lexstateeoff = {} # Dictionary of eof functions for each state + self.lexreflags = 0 # Optional re compile flags + self.lexdata = None # Actual input data (as a string) + self.lexpos = 0 # Current position in input text + self.lexlen = 0 # Length of the input text + self.lexerrorf = None # Error rule (if any) + self.lexeoff = None # EOF rule (if any) + self.lextokens = None # List of valid tokens + self.lexignore = '' # Ignored characters + self.lexliterals = '' # Literal characters that can be passed through + self.lexmodule = None # Module + self.lineno = 1 # Current line number + self.lexoptimize = False # Optimized mode + + def clone(self, object=None): + c = copy.copy(self) + + # If the object parameter has been supplied, it means we are attaching the + # lexer to a new object. In this case, we have to rebind all methods in + # the lexstatere and lexstateerrorf tables. + + if object: + newtab = {} + for key, ritem in self.lexstatere.items(): + newre = [] + for cre, findex in ritem: + newfindex = [] + for f in findex: + if not f or not f[0]: + newfindex.append(f) + continue + newfindex.append((getattr(object, f[0].__name__), f[1])) + newre.append((cre, newfindex)) + newtab[key] = newre + c.lexstatere = newtab + c.lexstateerrorf = {} + for key, ef in self.lexstateerrorf.items(): + c.lexstateerrorf[key] = getattr(object, ef.__name__) + c.lexmodule = object + return c + + # ------------------------------------------------------------ + # writetab() - Write lexer information to a table file + # ------------------------------------------------------------ + def writetab(self, lextab, outputdir=''): + if isinstance(lextab, types.ModuleType): + raise IOError("Won't overwrite existing lextab module") + basetabmodule = lextab.split('.')[-1] + filename = os.path.join(outputdir, basetabmodule) + '.py' + with open(filename, 'w') as tf: + tf.write('# %s.py. This file automatically created by PLY (version %s). 
Don\'t edit!\n' % (basetabmodule, __version__)) + tf.write('_tabversion = %s\n' % repr(__tabversion__)) + tf.write('_lextokens = set(%s)\n' % repr(tuple(sorted(self.lextokens)))) + tf.write('_lexreflags = %s\n' % repr(self.lexreflags)) + tf.write('_lexliterals = %s\n' % repr(self.lexliterals)) + tf.write('_lexstateinfo = %s\n' % repr(self.lexstateinfo)) + + # Rewrite the lexstatere table, replacing function objects with function names + tabre = {} + for statename, lre in self.lexstatere.items(): + titem = [] + for (pat, func), retext, renames in zip(lre, self.lexstateretext[statename], self.lexstaterenames[statename]): + titem.append((retext, _funcs_to_names(func, renames))) + tabre[statename] = titem + + tf.write('_lexstatere = %s\n' % repr(tabre)) + tf.write('_lexstateignore = %s\n' % repr(self.lexstateignore)) + + taberr = {} + for statename, ef in self.lexstateerrorf.items(): + taberr[statename] = ef.__name__ if ef else None + tf.write('_lexstateerrorf = %s\n' % repr(taberr)) + + tabeof = {} + for statename, ef in self.lexstateeoff.items(): + tabeof[statename] = ef.__name__ if ef else None + tf.write('_lexstateeoff = %s\n' % repr(tabeof)) + + # ------------------------------------------------------------ + # readtab() - Read lexer information from a tab file + # ------------------------------------------------------------ + def readtab(self, tabfile, fdict): + if isinstance(tabfile, types.ModuleType): + lextab = tabfile + else: + exec('import %s' % tabfile) + lextab = sys.modules[tabfile] + + if getattr(lextab, '_tabversion', '0.0') != __tabversion__: + raise ImportError('Inconsistent PLY version') + + self.lextokens = lextab._lextokens + self.lexreflags = lextab._lexreflags + self.lexliterals = lextab._lexliterals + self.lextokens_all = self.lextokens | set(self.lexliterals) + self.lexstateinfo = lextab._lexstateinfo + self.lexstateignore = lextab._lexstateignore + self.lexstatere = {} + self.lexstateretext = {} + for statename, lre in lextab._lexstatere.items(): + titem = [] + txtitem = [] + for pat, func_name in lre: + titem.append((re.compile(pat, lextab._lexreflags), _names_to_funcs(func_name, fdict))) + + self.lexstatere[statename] = titem + self.lexstateretext[statename] = txtitem + + self.lexstateerrorf = {} + for statename, ef in lextab._lexstateerrorf.items(): + self.lexstateerrorf[statename] = fdict[ef] + + self.lexstateeoff = {} + for statename, ef in lextab._lexstateeoff.items(): + self.lexstateeoff[statename] = fdict[ef] + + self.begin('INITIAL') + + # ------------------------------------------------------------ + # input() - Push a new string into the lexer + # ------------------------------------------------------------ + def input(self, s): + # Pull off the first character to see if s looks like a string + c = s[:1] + if not isinstance(c, StringTypes): + raise ValueError('Expected a string') + self.lexdata = s + self.lexpos = 0 + self.lexlen = len(s) + + # ------------------------------------------------------------ + # begin() - Changes the lexing state + # ------------------------------------------------------------ + def begin(self, state): + if state not in self.lexstatere: + raise ValueError('Undefined state') + self.lexre = self.lexstatere[state] + self.lexretext = self.lexstateretext[state] + self.lexignore = self.lexstateignore.get(state, '') + self.lexerrorf = self.lexstateerrorf.get(state, None) + self.lexeoff = self.lexstateeoff.get(state, None) + self.lexstate = state + + # ------------------------------------------------------------ + # push_state() - 
Changes the lexing state and saves old on stack + # ------------------------------------------------------------ + def push_state(self, state): + self.lexstatestack.append(self.lexstate) + self.begin(state) + + # ------------------------------------------------------------ + # pop_state() - Restores the previous state + # ------------------------------------------------------------ + def pop_state(self): + self.begin(self.lexstatestack.pop()) + + # ------------------------------------------------------------ + # current_state() - Returns the current lexing state + # ------------------------------------------------------------ + def current_state(self): + return self.lexstate + + # ------------------------------------------------------------ + # skip() - Skip ahead n characters + # ------------------------------------------------------------ + def skip(self, n): + self.lexpos += n + + # ------------------------------------------------------------ + # opttoken() - Return the next token from the Lexer + # + # Note: This function has been carefully implemented to be as fast + # as possible. Don't make changes unless you really know what + # you are doing + # ------------------------------------------------------------ + def token(self): + # Make local copies of frequently referenced attributes + lexpos = self.lexpos + lexlen = self.lexlen + lexignore = self.lexignore + lexdata = self.lexdata + + while lexpos < lexlen: + # This code provides some short-circuit code for whitespace, tabs, and other ignored characters + if lexdata[lexpos] in lexignore: + lexpos += 1 + continue + + # Look for a regular expression match + for lexre, lexindexfunc in self.lexre: + m = lexre.match(lexdata, lexpos) + if not m: + continue + + # Create a token for return + tok = LexToken() + tok.value = m.group() + tok.lineno = self.lineno + tok.lexpos = lexpos + + i = m.lastindex + func, tok.type = lexindexfunc[i] + + if not func: + # If no token type was set, it's an ignored token + if tok.type: + self.lexpos = m.end() + return tok + else: + lexpos = m.end() + break + + lexpos = m.end() + + # If token is processed by a function, call it + + tok.lexer = self # Set additional attributes useful in token rules + self.lexmatch = m + self.lexpos = lexpos + + newtok = func(tok) + + # Every function must return a token, if nothing, we just move to next token + if not newtok: + lexpos = self.lexpos # This is here in case user has updated lexpos. + lexignore = self.lexignore # This is here in case there was a state change + break + + # Verify type of the token. If not in the token map, raise an error + if not self.lexoptimize: + if newtok.type not in self.lextokens_all: + raise LexError("%s:%d: Rule '%s' returned an unknown token type '%s'" % ( + func.__code__.co_filename, func.__code__.co_firstlineno, + func.__name__, newtok.type), lexdata[lexpos:]) + + return newtok + else: + # No match, see if in literals + if lexdata[lexpos] in self.lexliterals: + tok = LexToken() + tok.value = lexdata[lexpos] + tok.lineno = self.lineno + tok.type = tok.value + tok.lexpos = lexpos + self.lexpos = lexpos + 1 + return tok + + # No match. Call t_error() if defined. + if self.lexerrorf: + tok = LexToken() + tok.value = self.lexdata[lexpos:] + tok.lineno = self.lineno + tok.type = 'error' + tok.lexer = self + tok.lexpos = lexpos + self.lexpos = lexpos + newtok = self.lexerrorf(tok) + if lexpos == self.lexpos: + # Error method didn't change text position at all. This is an error. + raise LexError("Scanning error. 
Illegal character '%s'" % (lexdata[lexpos]), lexdata[lexpos:]) + lexpos = self.lexpos + if not newtok: + continue + return newtok + + self.lexpos = lexpos + raise LexError("Illegal character '%s' at index %d" % (lexdata[lexpos], lexpos), lexdata[lexpos:]) + + if self.lexeoff: + tok = LexToken() + tok.type = 'eof' + tok.value = '' + tok.lineno = self.lineno + tok.lexpos = lexpos + tok.lexer = self + self.lexpos = lexpos + newtok = self.lexeoff(tok) + return newtok + + self.lexpos = lexpos + 1 + if self.lexdata is None: + raise RuntimeError('No input string given with input()') + return None + + # Iterator interface + def __iter__(self): + return self + + def next(self): + t = self.token() + if t is None: + raise StopIteration + return t + + __next__ = next + +# ----------------------------------------------------------------------------- +# ==== Lex Builder === +# +# The functions and classes below are used to collect lexing information +# and build a Lexer object from it. +# ----------------------------------------------------------------------------- + +# ----------------------------------------------------------------------------- +# _get_regex(func) +# +# Returns the regular expression assigned to a function either as a doc string +# or as a .regex attribute attached by the @TOKEN decorator. +# ----------------------------------------------------------------------------- +def _get_regex(func): + return getattr(func, 'regex', func.__doc__) + +# ----------------------------------------------------------------------------- +# get_caller_module_dict() +# +# This function returns a dictionary containing all of the symbols defined within +# a caller further down the call stack. This is used to get the environment +# associated with the yacc() call if none was provided. +# ----------------------------------------------------------------------------- +def get_caller_module_dict(levels): + f = sys._getframe(levels) + ldict = f.f_globals.copy() + if f.f_globals != f.f_locals: + ldict.update(f.f_locals) + return ldict + +# ----------------------------------------------------------------------------- +# _funcs_to_names() +# +# Given a list of regular expression functions, this converts it to a list +# suitable for output to a table file +# ----------------------------------------------------------------------------- +def _funcs_to_names(funclist, namelist): + result = [] + for f, name in zip(funclist, namelist): + if f and f[0]: + result.append((name, f[1])) + else: + result.append(f) + return result + +# ----------------------------------------------------------------------------- +# _names_to_funcs() +# +# Given a list of regular expression function names, this converts it back to +# functions. +# ----------------------------------------------------------------------------- +def _names_to_funcs(namelist, fdict): + result = [] + for n in namelist: + if n and n[0]: + result.append((fdict[n[0]], n[1])) + else: + result.append(n) + return result + +# ----------------------------------------------------------------------------- +# _form_master_re() +# +# This function takes a list of all of the regex components and attempts to +# form the master regular expression. Given limitations in the Python re +# module, it may be necessary to break the master regex into separate expressions. 
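An editorial aside, not part of the vendored file: the technique this header describes can be seen in miniature below. PLY folds every rule into one alternation of named groups and dispatches on whichever group matched; when the combined pattern exceeds what the re module can handle, the except branch below recursively halves the list. Token names here are invented for illustration.

    import re

    # Two toy rules folded into a single master pattern, as _form_master_re() does.
    master = re.compile(r'(?P<NUMBER>\d+)|(?P<PLUS>\+)')

    def scan(text):
        pos = 0
        while pos < len(text):
            m = master.match(text, pos)
            if not m:
                raise SyntaxError('illegal character %r' % text[pos])
            # m.lastgroup plays the role of the lexindexfunc lookup table.
            yield m.lastgroup, m.group()
            pos = m.end()

    # list(scan('1+2')) -> [('NUMBER', '1'), ('PLUS', '+'), ('NUMBER', '2')]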
+# ----------------------------------------------------------------------------- +def _form_master_re(relist, reflags, ldict, toknames): + if not relist: + return [] + regex = '|'.join(relist) + try: + lexre = re.compile(regex, reflags) + + # Build the index to function map for the matching engine + lexindexfunc = [None] * (max(lexre.groupindex.values()) + 1) + lexindexnames = lexindexfunc[:] + + for f, i in lexre.groupindex.items(): + handle = ldict.get(f, None) + if type(handle) in (types.FunctionType, types.MethodType): + lexindexfunc[i] = (handle, toknames[f]) + lexindexnames[i] = f + elif handle is not None: + lexindexnames[i] = f + if f.find('ignore_') > 0: + lexindexfunc[i] = (None, None) + else: + lexindexfunc[i] = (None, toknames[f]) + + return [(lexre, lexindexfunc)], [regex], [lexindexnames] + except Exception: + m = int(len(relist)/2) + if m == 0: + m = 1 + llist, lre, lnames = _form_master_re(relist[:m], reflags, ldict, toknames) + rlist, rre, rnames = _form_master_re(relist[m:], reflags, ldict, toknames) + return (llist+rlist), (lre+rre), (lnames+rnames) + +# ----------------------------------------------------------------------------- +# def _statetoken(s,names) +# +# Given a declaration name s of the form "t_" and a dictionary whose keys are +# state names, this function returns a tuple (states,tokenname) where states +# is a tuple of state names and tokenname is the name of the token. For example, +# calling this with s = "t_foo_bar_SPAM" might return (('foo','bar'),'SPAM') +# ----------------------------------------------------------------------------- +def _statetoken(s, names): + nonstate = 1 + parts = s.split('_') + for i, part in enumerate(parts[1:], 1): + if part not in names and part != 'ANY': + break + + if i > 1: + states = tuple(parts[1:i]) + else: + states = ('INITIAL',) + + if 'ANY' in states: + states = tuple(names) + + tokenname = '_'.join(parts[i:]) + return (states, tokenname) + + +# ----------------------------------------------------------------------------- +# LexerReflect() +# +# This class represents information needed to build a lexer as extracted from a +# user's input file. 
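For orientation (an editorial sketch, not part of the file): the "user's input file" that LexerReflect scans is an ordinary module defining a tokens list, optional literals and states, and t_-prefixed rules as functions or strings. The token names below are invented, and the import path assumes the vendored package layout.

    from pycparser.ply import lex

    tokens = ('NUMBER', 'ID')
    literals = '+-'

    # Function rules keep their regex in the docstring and run on each match.
    def t_NUMBER(t):
        r'\d+'
        t.value = int(t.value)
        return t

    # String rules are bare patterns; get_rules() sorts them longest-first.
    t_ID = r'[A-Za-z_][A-Za-z_0-9]*'

    t_ignore = ' \t'

    def t_error(t):
        print('Illegal character %r' % t.value[0])
        t.lexer.skip(1)

    lexer = lex.lex()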
+# ----------------------------------------------------------------------------- +class LexerReflect(object): + def __init__(self, ldict, log=None, reflags=0): + self.ldict = ldict + self.error_func = None + self.tokens = [] + self.reflags = reflags + self.stateinfo = {'INITIAL': 'inclusive'} + self.modules = set() + self.error = False + self.log = PlyLogger(sys.stderr) if log is None else log + + # Get all of the basic information + def get_all(self): + self.get_tokens() + self.get_literals() + self.get_states() + self.get_rules() + + # Validate all of the information + def validate_all(self): + self.validate_tokens() + self.validate_literals() + self.validate_rules() + return self.error + + # Get the tokens map + def get_tokens(self): + tokens = self.ldict.get('tokens', None) + if not tokens: + self.log.error('No token list is defined') + self.error = True + return + + if not isinstance(tokens, (list, tuple)): + self.log.error('tokens must be a list or tuple') + self.error = True + return + + if not tokens: + self.log.error('tokens is empty') + self.error = True + return + + self.tokens = tokens + + # Validate the tokens + def validate_tokens(self): + terminals = {} + for n in self.tokens: + if not _is_identifier.match(n): + self.log.error("Bad token name '%s'", n) + self.error = True + if n in terminals: + self.log.warning("Token '%s' multiply defined", n) + terminals[n] = 1 + + # Get the literals specifier + def get_literals(self): + self.literals = self.ldict.get('literals', '') + if not self.literals: + self.literals = '' + + # Validate literals + def validate_literals(self): + try: + for c in self.literals: + if not isinstance(c, StringTypes) or len(c) > 1: + self.log.error('Invalid literal %s. Must be a single character', repr(c)) + self.error = True + + except TypeError: + self.log.error('Invalid literals specification. literals must be a sequence of characters') + self.error = True + + def get_states(self): + self.states = self.ldict.get('states', None) + # Build statemap + if self.states: + if not isinstance(self.states, (tuple, list)): + self.log.error('states must be defined as a tuple or list') + self.error = True + else: + for s in self.states: + if not isinstance(s, tuple) or len(s) != 2: + self.log.error("Invalid state specifier %s. 
Must be a tuple (statename,'exclusive|inclusive')", repr(s)) + self.error = True + continue + name, statetype = s + if not isinstance(name, StringTypes): + self.log.error('State name %s must be a string', repr(name)) + self.error = True + continue + if not (statetype == 'inclusive' or statetype == 'exclusive'): + self.log.error("State type for state %s must be 'inclusive' or 'exclusive'", name) + self.error = True + continue + if name in self.stateinfo: + self.log.error("State '%s' already defined", name) + self.error = True + continue + self.stateinfo[name] = statetype + + # Get all of the symbols with a t_ prefix and sort them into various + # categories (functions, strings, error functions, and ignore characters) + + def get_rules(self): + tsymbols = [f for f in self.ldict if f[:2] == 't_'] + + # Now build up a list of functions and a list of strings + self.toknames = {} # Mapping of symbols to token names + self.funcsym = {} # Symbols defined as functions + self.strsym = {} # Symbols defined as strings + self.ignore = {} # Ignore strings by state + self.errorf = {} # Error functions by state + self.eoff = {} # EOF functions by state + + for s in self.stateinfo: + self.funcsym[s] = [] + self.strsym[s] = [] + + if len(tsymbols) == 0: + self.log.error('No rules of the form t_rulename are defined') + self.error = True + return + + for f in tsymbols: + t = self.ldict[f] + states, tokname = _statetoken(f, self.stateinfo) + self.toknames[f] = tokname + + if hasattr(t, '__call__'): + if tokname == 'error': + for s in states: + self.errorf[s] = t + elif tokname == 'eof': + for s in states: + self.eoff[s] = t + elif tokname == 'ignore': + line = t.__code__.co_firstlineno + file = t.__code__.co_filename + self.log.error("%s:%d: Rule '%s' must be defined as a string", file, line, t.__name__) + self.error = True + else: + for s in states: + self.funcsym[s].append((f, t)) + elif isinstance(t, StringTypes): + if tokname == 'ignore': + for s in states: + self.ignore[s] = t + if '\\' in t: + self.log.warning("%s contains a literal backslash '\\'", f) + + elif tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", f) + self.error = True + else: + for s in states: + self.strsym[s].append((f, t)) + else: + self.log.error('%s not defined as a function or string', f) + self.error = True + + # Sort the functions by line number + for f in self.funcsym.values(): + f.sort(key=lambda x: x[1].__code__.co_firstlineno) + + # Sort the strings by regular expression length + for s in self.strsym.values(): + s.sort(key=lambda x: len(x[1]), reverse=True) + + # Validate all of the t_rules collected + def validate_rules(self): + for state in self.stateinfo: + # Validate all rules defined by functions + + for fname, f in self.funcsym[state]: + line = f.__code__.co_firstlineno + file = f.__code__.co_filename + module = inspect.getmodule(f) + self.modules.add(module) + + tokname = self.toknames[fname] + if isinstance(f, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + nargs = f.__code__.co_argcount + if nargs > reqargs: + self.log.error("%s:%d: Rule '%s' has too many arguments", file, line, f.__name__) + self.error = True + continue + + if nargs < reqargs: + self.log.error("%s:%d: Rule '%s' requires an argument", file, line, f.__name__) + self.error = True + continue + + if not _get_regex(f): + self.log.error("%s:%d: No regular expression defined for rule '%s'", file, line, f.__name__) + self.error = True + continue + + try: + c = re.compile('(?P<%s>%s)' % (fname, _get_regex(f)), 
self.reflags) + if c.match(''): + self.log.error("%s:%d: Regular expression for rule '%s' matches empty string", file, line, f.__name__) + self.error = True + except re.error as e: + self.log.error("%s:%d: Invalid regular expression for rule '%s'. %s", file, line, f.__name__, e) + if '#' in _get_regex(f): + self.log.error("%s:%d. Make sure '#' in rule '%s' is escaped with '\\#'", file, line, f.__name__) + self.error = True + + # Validate all rules defined by strings + for name, r in self.strsym[state]: + tokname = self.toknames[name] + if tokname == 'error': + self.log.error("Rule '%s' must be defined as a function", name) + self.error = True + continue + + if tokname not in self.tokens and tokname.find('ignore_') < 0: + self.log.error("Rule '%s' defined for an unspecified token %s", name, tokname) + self.error = True + continue + + try: + c = re.compile('(?P<%s>%s)' % (name, r), self.reflags) + if (c.match('')): + self.log.error("Regular expression for rule '%s' matches empty string", name) + self.error = True + except re.error as e: + self.log.error("Invalid regular expression for rule '%s'. %s", name, e) + if '#' in r: + self.log.error("Make sure '#' in rule '%s' is escaped with '\\#'", name) + self.error = True + + if not self.funcsym[state] and not self.strsym[state]: + self.log.error("No rules defined for state '%s'", state) + self.error = True + + # Validate the error function + efunc = self.errorf.get(state, None) + if efunc: + f = efunc + line = f.__code__.co_firstlineno + file = f.__code__.co_filename + module = inspect.getmodule(f) + self.modules.add(module) + + if isinstance(f, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + nargs = f.__code__.co_argcount + if nargs > reqargs: + self.log.error("%s:%d: Rule '%s' has too many arguments", file, line, f.__name__) + self.error = True + + if nargs < reqargs: + self.log.error("%s:%d: Rule '%s' requires an argument", file, line, f.__name__) + self.error = True + + for module in self.modules: + self.validate_module(module) + + # ----------------------------------------------------------------------------- + # validate_module() + # + # This checks to see if there are duplicated t_rulename() functions or strings + # in the parser input file. This is done using a simple regular expression + # match on each line in the source code of the given module. + # ----------------------------------------------------------------------------- + + def validate_module(self, module): + try: + lines, linen = inspect.getsourcelines(module) + except IOError: + return + + fre = re.compile(r'\s*def\s+(t_[a-zA-Z_0-9]*)\(') + sre = re.compile(r'\s*(t_[a-zA-Z_0-9]*)\s*=') + + counthash = {} + linen += 1 + for line in lines: + m = fre.match(line) + if not m: + m = sre.match(line) + if m: + name = m.group(1) + prev = counthash.get(name) + if not prev: + counthash[name] = linen + else: + filename = inspect.getsourcefile(module) + self.log.error('%s:%d: Rule %s redefined. 
Previously defined on line %d', filename, linen, name, prev) + self.error = True + linen += 1 + +# ----------------------------------------------------------------------------- +# lex(module) +# +# Build all of the regular expression rules from definitions in the supplied module +# ----------------------------------------------------------------------------- +def lex(module=None, object=None, debug=False, optimize=False, lextab='lextab', + reflags=int(re.VERBOSE), nowarn=False, outputdir=None, debuglog=None, errorlog=None): + + if lextab is None: + lextab = 'lextab' + + global lexer + + ldict = None + stateinfo = {'INITIAL': 'inclusive'} + lexobj = Lexer() + lexobj.lexoptimize = optimize + global token, input + + if errorlog is None: + errorlog = PlyLogger(sys.stderr) + + if debug: + if debuglog is None: + debuglog = PlyLogger(sys.stderr) + + # Get the module dictionary used for the lexer + if object: + module = object + + # Get the module dictionary used for the parser + if module: + _items = [(k, getattr(module, k)) for k in dir(module)] + ldict = dict(_items) + # If no __file__ attribute is available, try to obtain it from the __module__ instead + if '__file__' not in ldict: + ldict['__file__'] = sys.modules[ldict['__module__']].__file__ + else: + ldict = get_caller_module_dict(2) + + # Determine if the module is package of a package or not. + # If so, fix the tabmodule setting so that tables load correctly + pkg = ldict.get('__package__') + if pkg and isinstance(lextab, str): + if '.' not in lextab: + lextab = pkg + '.' + lextab + + # Collect parser information from the dictionary + linfo = LexerReflect(ldict, log=errorlog, reflags=reflags) + linfo.get_all() + if not optimize: + if linfo.validate_all(): + raise SyntaxError("Can't build lexer") + + if optimize and lextab: + try: + lexobj.readtab(lextab, ldict) + token = lexobj.token + input = lexobj.input + lexer = lexobj + return lexobj + + except ImportError: + pass + + # Dump some basic debugging information + if debug: + debuglog.info('lex: tokens = %r', linfo.tokens) + debuglog.info('lex: literals = %r', linfo.literals) + debuglog.info('lex: states = %r', linfo.stateinfo) + + # Build a dictionary of valid token names + lexobj.lextokens = set() + for n in linfo.tokens: + lexobj.lextokens.add(n) + + # Get literals specification + if isinstance(linfo.literals, (list, tuple)): + lexobj.lexliterals = type(linfo.literals[0])().join(linfo.literals) + else: + lexobj.lexliterals = linfo.literals + + lexobj.lextokens_all = lexobj.lextokens | set(lexobj.lexliterals) + + # Get the stateinfo dictionary + stateinfo = linfo.stateinfo + + regexs = {} + # Build the master regular expressions + for state in stateinfo: + regex_list = [] + + # Add rules defined by functions first + for fname, f in linfo.funcsym[state]: + line = f.__code__.co_firstlineno + file = f.__code__.co_filename + regex_list.append('(?P<%s>%s)' % (fname, _get_regex(f))) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')", fname, _get_regex(f), state) + + # Now add all of the simple rules + for name, r in linfo.strsym[state]: + regex_list.append('(?P<%s>%s)' % (name, r)) + if debug: + debuglog.info("lex: Adding rule %s -> '%s' (state '%s')", name, r, state) + + regexs[state] = regex_list + + # Build the master regular expressions + + if debug: + debuglog.info('lex: ==== MASTER REGEXS FOLLOW ====') + + for state in regexs: + lexre, re_text, re_names = _form_master_re(regexs[state], reflags, ldict, linfo.toknames) + lexobj.lexstatere[state] = lexre + 
lexobj.lexstateretext[state] = re_text + lexobj.lexstaterenames[state] = re_names + if debug: + for i, text in enumerate(re_text): + debuglog.info("lex: state '%s' : regex[%d] = '%s'", state, i, text) + + # For inclusive states, we need to add the regular expressions from the INITIAL state + for state, stype in stateinfo.items(): + if state != 'INITIAL' and stype == 'inclusive': + lexobj.lexstatere[state].extend(lexobj.lexstatere['INITIAL']) + lexobj.lexstateretext[state].extend(lexobj.lexstateretext['INITIAL']) + lexobj.lexstaterenames[state].extend(lexobj.lexstaterenames['INITIAL']) + + lexobj.lexstateinfo = stateinfo + lexobj.lexre = lexobj.lexstatere['INITIAL'] + lexobj.lexretext = lexobj.lexstateretext['INITIAL'] + lexobj.lexreflags = reflags + + # Set up ignore variables + lexobj.lexstateignore = linfo.ignore + lexobj.lexignore = lexobj.lexstateignore.get('INITIAL', '') + + # Set up error functions + lexobj.lexstateerrorf = linfo.errorf + lexobj.lexerrorf = linfo.errorf.get('INITIAL', None) + if not lexobj.lexerrorf: + errorlog.warning('No t_error rule is defined') + + # Set up eof functions + lexobj.lexstateeoff = linfo.eoff + lexobj.lexeoff = linfo.eoff.get('INITIAL', None) + + # Check state information for ignore and error rules + for s, stype in stateinfo.items(): + if stype == 'exclusive': + if s not in linfo.errorf: + errorlog.warning("No error rule is defined for exclusive state '%s'", s) + if s not in linfo.ignore and lexobj.lexignore: + errorlog.warning("No ignore rule is defined for exclusive state '%s'", s) + elif stype == 'inclusive': + if s not in linfo.errorf: + linfo.errorf[s] = linfo.errorf.get('INITIAL', None) + if s not in linfo.ignore: + linfo.ignore[s] = linfo.ignore.get('INITIAL', '') + + # Create global versions of the token() and input() functions + token = lexobj.token + input = lexobj.input + lexer = lexobj + + # If in optimize mode, we write the lextab + if lextab and optimize: + if outputdir is None: + # If no output directory is set, the location of the output files + # is determined according to the following rules: + # - If lextab specifies a package, files go into that package directory + # - Otherwise, files go in the same directory as the specifying module + if isinstance(lextab, types.ModuleType): + srcfile = lextab.__file__ + else: + if '.' not in lextab: + srcfile = ldict['__file__'] + else: + parts = lextab.split('.') + pkgname = '.'.join(parts[:-1]) + exec('import %s' % pkgname) + srcfile = getattr(sys.modules[pkgname], '__file__', '') + outputdir = os.path.dirname(srcfile) + try: + lexobj.writetab(lextab, outputdir) + except IOError as e: + errorlog.warning("Couldn't write lextab module %r. 
%s" % (lextab, e)) + + return lexobj + +# ----------------------------------------------------------------------------- +# runmain() +# +# This runs the lexer as a main program +# ----------------------------------------------------------------------------- + +def runmain(lexer=None, data=None): + if not data: + try: + filename = sys.argv[1] + f = open(filename) + data = f.read() + f.close() + except IndexError: + sys.stdout.write('Reading from standard input (type EOF to end):\n') + data = sys.stdin.read() + + if lexer: + _input = lexer.input + else: + _input = input + _input(data) + if lexer: + _token = lexer.token + else: + _token = token + + while True: + tok = _token() + if not tok: + break + sys.stdout.write('(%s,%r,%d,%d)\n' % (tok.type, tok.value, tok.lineno, tok.lexpos)) + +# ----------------------------------------------------------------------------- +# @TOKEN(regex) +# +# This decorator function can be used to set the regex expression on a function +# when its docstring might need to be set in an alternative way +# ----------------------------------------------------------------------------- + +def TOKEN(r): + def set_regex(f): + if hasattr(r, '__call__'): + f.regex = _get_regex(r) + else: + f.regex = r + return f + return set_regex + +# Alternative spelling of the TOKEN decorator +Token = TOKEN diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/yacc.py b/templates/skills/file_manager/dependencies/pycparser/ply/yacc.py new file mode 100644 index 00000000..20b4f286 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/ply/yacc.py @@ -0,0 +1,3494 @@ +# ----------------------------------------------------------------------------- +# ply: yacc.py +# +# Copyright (C) 2001-2017 +# David M. Beazley (Dabeaz LLC) +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without +# modification, are permitted provided that the following conditions are +# met: +# +# * Redistributions of source code must retain the above copyright notice, +# this list of conditions and the following disclaimer. +# * Redistributions in binary form must reproduce the above copyright notice, +# this list of conditions and the following disclaimer in the documentation +# and/or other materials provided with the distribution. +# * Neither the name of the David Beazley or Dabeaz LLC may be used to +# endorse or promote products derived from this software without +# specific prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. +# ----------------------------------------------------------------------------- +# +# This implements an LR parser that is constructed from grammar rules defined +# as Python functions. The grammer is specified by supplying the BNF inside +# Python documentation strings. 
The inspiration for this technique was borrowed +# from John Aycock's Spark parsing system. PLY might be viewed as cross between +# Spark and the GNU bison utility. +# +# The current implementation is only somewhat object-oriented. The +# LR parser itself is defined in terms of an object (which allows multiple +# parsers to co-exist). However, most of the variables used during table +# construction are defined in terms of global variables. Users shouldn't +# notice unless they are trying to define multiple parsers at the same +# time using threads (in which case they should have their head examined). +# +# This implementation supports both SLR and LALR(1) parsing. LALR(1) +# support was originally implemented by Elias Ioup (ezioup@alumni.uchicago.edu), +# using the algorithm found in Aho, Sethi, and Ullman "Compilers: Principles, +# Techniques, and Tools" (The Dragon Book). LALR(1) has since been replaced +# by the more efficient DeRemer and Pennello algorithm. +# +# :::::::: WARNING ::::::: +# +# Construction of LR parsing tables is fairly complicated and expensive. +# To make this module run fast, a *LOT* of work has been put into +# optimization---often at the expensive of readability and what might +# consider to be good Python "coding style." Modify the code at your +# own risk! +# ---------------------------------------------------------------------------- + +import re +import types +import sys +import os.path +import inspect +import base64 +import warnings + +__version__ = '3.10' +__tabversion__ = '3.10' + +#----------------------------------------------------------------------------- +# === User configurable parameters === +# +# Change these to modify the default behavior of yacc (if you wish) +#----------------------------------------------------------------------------- + +yaccdebug = True # Debugging mode. If set, yacc generates a + # a 'parser.out' file in the current directory + +debug_file = 'parser.out' # Default name of the debugging file +tab_module = 'parsetab' # Default name of the table module +default_lr = 'LALR' # Default LR table generation method + +error_count = 3 # Number of symbols that must be shifted to leave recovery mode + +yaccdevel = False # Set to True if developing yacc. This turns off optimized + # implementations of certain functions. + +resultlimit = 40 # Size limit of results when running in debug mode. + +pickle_protocol = 0 # Protocol to use when writing pickle files + +# String type-checking compatibility +if sys.version_info[0] < 3: + string_types = basestring +else: + string_types = str + +MAXINT = sys.maxsize + +# This object is a stand-in for a logging object created by the +# logging module. PLY will use this by default to create things +# such as the parser.out file. If a user wants more detailed +# information, they can create their own logging object and pass +# it into PLY. + +class PlyLogger(object): + def __init__(self, f): + self.f = f + + def debug(self, msg, *args, **kwargs): + self.f.write((msg % args) + '\n') + + info = debug + + def warning(self, msg, *args, **kwargs): + self.f.write('WARNING: ' + (msg % args) + '\n') + + def error(self, msg, *args, **kwargs): + self.f.write('ERROR: ' + (msg % args) + '\n') + + critical = debug + +# Null logger is used when no output is generated. Does nothing. 
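An editorial note: PlyLogger is duck-typed, so any object exposing debug/info/warning/error/critical can stand in for it, including a logger from the standard logging module. A sketch under that assumption (it also presumes a grammar module is already in scope, since yacc() reflects over its caller):

    import logging
    from pycparser.ply import yacc

    logging.basicConfig(filename='parser.log', level=logging.DEBUG)
    log = logging.getLogger('ply')

    # Route table-construction diagnostics and warnings through logging
    # instead of the default PlyLogger(sys.stderr).
    parser = yacc.yacc(debug=True, debuglog=log, errorlog=log)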
+class NullLogger(object): + def __getattribute__(self, name): + return self + + def __call__(self, *args, **kwargs): + return self + +# Exception raised for yacc-related errors +class YaccError(Exception): + pass + +# Format the result message that the parser produces when running in debug mode. +def format_result(r): + repr_str = repr(r) + if '\n' in repr_str: + repr_str = repr(repr_str) + if len(repr_str) > resultlimit: + repr_str = repr_str[:resultlimit] + ' ...' + result = '<%s @ 0x%x> (%s)' % (type(r).__name__, id(r), repr_str) + return result + +# Format stack entries when the parser is running in debug mode +def format_stack_entry(r): + repr_str = repr(r) + if '\n' in repr_str: + repr_str = repr(repr_str) + if len(repr_str) < 16: + return repr_str + else: + return '<%s @ 0x%x>' % (type(r).__name__, id(r)) + +# Panic mode error recovery support. This feature is being reworked--much of the +# code here is to offer a deprecation/backwards compatible transition + +_errok = None +_token = None +_restart = None +_warnmsg = '''PLY: Don't use global functions errok(), token(), and restart() in p_error(). +Instead, invoke the methods on the associated parser instance: + + def p_error(p): + ... + # Use parser.errok(), parser.token(), parser.restart() + ... + + parser = yacc.yacc() +''' + +def errok(): + warnings.warn(_warnmsg) + return _errok() + +def restart(): + warnings.warn(_warnmsg) + return _restart() + +def token(): + warnings.warn(_warnmsg) + return _token() + +# Utility function to call the p_error() function with some deprecation hacks +def call_errorfunc(errorfunc, token, parser): + global _errok, _token, _restart + _errok = parser.errok + _token = parser.token + _restart = parser.restart + r = errorfunc(token) + try: + del _errok, _token, _restart + except NameError: + pass + return r + +#----------------------------------------------------------------------------- +# === LR Parsing Engine === +# +# The following classes are used for the LR parser itself. These are not +# used during table construction and are independent of the actual LR +# table generation algorithm +#----------------------------------------------------------------------------- + +# This class is used to hold non-terminal grammar symbols during parsing. +# It normally has the following attributes set: +# .type = Grammar symbol type +# .value = Symbol value +# .lineno = Starting line number +# .endlineno = Ending line number (optional, set automatically) +# .lexpos = Starting lex position +# .endlexpos = Ending lex position (optional, set automatically) + +class YaccSymbol: + def __str__(self): + return self.type + + def __repr__(self): + return str(self) + +# This class is a wrapper around the objects actually passed to each +# grammar rule. Index lookup and assignment actually assign the +# .value attribute of the underlying YaccSymbol object. +# The lineno() method returns the line number of a given +# item (or 0 if not defined). The linespan() method returns +# a tuple of (startline,endline) representing the range of lines +# for a symbol. The lexspan() method returns a tuple (lexpos,endlexpos) +# representing the range of positional information for a symbol. 
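To make the comment above concrete (an editorial sketch; the rule and token names are invented): inside a grammar rule, the YaccProduction instance is the p argument. Indexing reads and assigns the .value slots of the underlying symbols, and the helper methods expose position data.

    def p_expression_plus(p):
        'expression : expression PLUS term'
        # p[0] is the result slot; p[1] and p[3] are the operand values.
        p[0] = p[1] + p[3]
        # linespan()/lexspan() rely on endlineno/endlexpos, which are only
        # filled in when parsing with tracking=True.
        first, last = p.linespan(1)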
+ +class YaccProduction: + def __init__(self, s, stack=None): + self.slice = s + self.stack = stack + self.lexer = None + self.parser = None + + def __getitem__(self, n): + if isinstance(n, slice): + return [s.value for s in self.slice[n]] + elif n >= 0: + return self.slice[n].value + else: + return self.stack[n].value + + def __setitem__(self, n, v): + self.slice[n].value = v + + def __getslice__(self, i, j): + return [s.value for s in self.slice[i:j]] + + def __len__(self): + return len(self.slice) + + def lineno(self, n): + return getattr(self.slice[n], 'lineno', 0) + + def set_lineno(self, n, lineno): + self.slice[n].lineno = lineno + + def linespan(self, n): + startline = getattr(self.slice[n], 'lineno', 0) + endline = getattr(self.slice[n], 'endlineno', startline) + return startline, endline + + def lexpos(self, n): + return getattr(self.slice[n], 'lexpos', 0) + + def lexspan(self, n): + startpos = getattr(self.slice[n], 'lexpos', 0) + endpos = getattr(self.slice[n], 'endlexpos', startpos) + return startpos, endpos + + def error(self): + raise SyntaxError + +# ----------------------------------------------------------------------------- +# == LRParser == +# +# The LR Parsing engine. +# ----------------------------------------------------------------------------- + +class LRParser: + def __init__(self, lrtab, errorf): + self.productions = lrtab.lr_productions + self.action = lrtab.lr_action + self.goto = lrtab.lr_goto + self.errorfunc = errorf + self.set_defaulted_states() + self.errorok = True + + def errok(self): + self.errorok = True + + def restart(self): + del self.statestack[:] + del self.symstack[:] + sym = YaccSymbol() + sym.type = '$end' + self.symstack.append(sym) + self.statestack.append(0) + + # Defaulted state support. + # This method identifies parser states where there is only one possible reduction action. + # For such states, the parser can make a choose to make a rule reduction without consuming + # the next look-ahead token. This delayed invocation of the tokenizer can be useful in + # certain kinds of advanced parsing situations where the lexer and parser interact with + # each other or change states (i.e., manipulation of scope, lexer states, etc.). + # + # See: https://www.gnu.org/software/bison/manual/html_node/Default-Reductions.html#Default-Reductions + def set_defaulted_states(self): + self.defaulted_states = {} + for state, actions in self.action.items(): + rules = list(actions.values()) + if len(rules) == 1 and rules[0] < 0: + self.defaulted_states[state] = rules[0] + + def disable_defaulted_states(self): + self.defaulted_states = {} + + def parse(self, input=None, lexer=None, debug=False, tracking=False, tokenfunc=None): + if debug or yaccdevel: + if isinstance(debug, int): + debug = PlyLogger(sys.stderr) + return self.parsedebug(input, lexer, debug, tracking, tokenfunc) + elif tracking: + return self.parseopt(input, lexer, debug, tracking, tokenfunc) + else: + return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc) + + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # parsedebug(). + # + # This is the debugging enabled version of parse(). All changes made to the + # parsing engine should be made here. Optimized versions of this function + # are automatically created by the ply/ygen.py script. This script cuts out + # sections enclosed in markers such as this: + # + # #--! DEBUG + # statements + # #--! DEBUG + # + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
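The dispatch in parse() above means the caller's flags pick one of the three generated engines. Illustrative call sites (assuming parser came from yacc.yacc() and data is an input string):

    result = parser.parse(data)                 # parseopt_notrack: fastest path
    result = parser.parse(data, tracking=True)  # parseopt: keeps line/position spans
    result = parser.parse(data, debug=True)     # parsedebug: logs every shift/reduce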
+ + def parsedebug(self, input=None, lexer=None, debug=False, tracking=False, tokenfunc=None): + #--! parsedebug-start + lookahead = None # Current lookahead symbol + lookaheadstack = [] # Stack of lookahead symbols + actions = self.action # Local reference to action table (to avoid lookup on self.) + goto = self.goto # Local reference to goto table (to avoid lookup on self.) + prod = self.productions # Local reference to production list (to avoid lookup on self.) + defaulted_states = self.defaulted_states # Local reference to defaulted states + pslice = YaccProduction(None) # Production object passed to grammar rules + errorcount = 0 # Used during error recovery + + #--! DEBUG + debug.info('PLY: PARSE DEBUG START') + #--! DEBUG + + # If no lexer was given, we will try to use the lex module + if not lexer: + from . import lex + lexer = lex.lexer + + # Set up the lexer and parser objects on pslice + pslice.lexer = lexer + pslice.parser = self + + # If input was supplied, pass to lexer + if input is not None: + lexer.input(input) + + if tokenfunc is None: + # Tokenize function + get_token = lexer.token + else: + get_token = tokenfunc + + # Set the parser() token method (sometimes used in error recovery) + self.token = get_token + + # Set up the state and symbol stacks + + statestack = [] # Stack of parsing states + self.statestack = statestack + symstack = [] # Stack of grammar symbols + self.symstack = symstack + + pslice.stack = symstack # Put in the production + errtoken = None # Err token + + # The start state is assumed to be (0,$end) + + statestack.append(0) + sym = YaccSymbol() + sym.type = '$end' + symstack.append(sym) + state = 0 + while True: + # Get the next symbol on the input. If a lookahead symbol + # is already set, we just use that. Otherwise, we'll pull + # the next token off of the lookaheadstack or from the lexer + + #--! DEBUG + debug.debug('') + debug.debug('State : %s', state) + #--! DEBUG + + if state not in defaulted_states: + if not lookahead: + if not lookaheadstack: + lookahead = get_token() # Get the next token + else: + lookahead = lookaheadstack.pop() + if not lookahead: + lookahead = YaccSymbol() + lookahead.type = '$end' + + # Check the action table + ltype = lookahead.type + t = actions[state].get(ltype) + else: + t = defaulted_states[state] + #--! DEBUG + debug.debug('Defaulted state %s: Reduce using %d', state, -t) + #--! DEBUG + + #--! DEBUG + debug.debug('Stack : %s', + ('%s . %s' % (' '.join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) + #--! DEBUG + + if t is not None: + if t > 0: + # shift a symbol on the stack + statestack.append(t) + state = t + + #--! DEBUG + debug.debug('Action : Shift and goto state %s', t) + #--! DEBUG + + symstack.append(lookahead) + lookahead = None + + # Decrease error count on successful shift + if errorcount: + errorcount -= 1 + continue + + if t < 0: + # reduce a symbol on the stack, emit a production + p = prod[-t] + pname = p.name + plen = p.len + + # Get production function + sym = YaccSymbol() + sym.type = pname # Production name + sym.value = None + + #--! DEBUG + if plen: + debug.info('Action : Reduce rule [%s] with %s and goto state %d', p.str, + '['+','.join([format_stack_entry(_v.value) for _v in symstack[-plen:]])+']', + goto[statestack[-1-plen]][pname]) + else: + debug.info('Action : Reduce rule [%s] with %s and goto state %d', p.str, [], + goto[statestack[-1]][pname]) + + #--! DEBUG + + if plen: + targ = symstack[-plen-1:] + targ[0] = sym + + #--! 
TRACKING + if tracking: + t1 = targ[1] + sym.lineno = t1.lineno + sym.lexpos = t1.lexpos + t1 = targ[-1] + sym.endlineno = getattr(t1, 'endlineno', t1.lineno) + sym.endlexpos = getattr(t1, 'endlexpos', t1.lexpos) + #--! TRACKING + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # below as a performance optimization. Make sure + # changes get made in both locations. + + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + del symstack[-plen:] + self.state = state + p.callable(pslice) + del statestack[-plen:] + #--! DEBUG + debug.info('Result : %s', format_result(pslice[0])) + #--! DEBUG + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + symstack.extend(targ[1:-1]) # Put the production slice back on the stack + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + else: + + #--! TRACKING + if tracking: + sym.lineno = lexer.lineno + sym.lexpos = lexer.lexpos + #--! TRACKING + + targ = [sym] + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # above as a performance optimization. Make sure + # changes get made in both locations. + + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + self.state = state + p.callable(pslice) + #--! DEBUG + debug.info('Result : %s', format_result(pslice[0])) + #--! DEBUG + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + if t == 0: + n = symstack[-1] + result = getattr(n, 'value', None) + #--! DEBUG + debug.info('Done : Returning %s', format_result(result)) + debug.info('PLY: PARSE DEBUG END') + #--! DEBUG + return result + + if t is None: + + #--! DEBUG + debug.error('Error : %s', + ('%s . %s' % (' '.join([xx.type for xx in symstack][1:]), str(lookahead))).lstrip()) + #--! DEBUG + + # We have some kind of parsing error here. To handle + # this, we are going to push the current token onto + # the tokenstack and replace it with an 'error' token. + # If there are any synchronization rules, they may + # catch it. + # + # In addition to pushing the error token, we call call + # the user defined p_error() function if this is the + # first syntax error. This function is only called if + # errorcount == 0. + if errorcount == 0 or self.errorok: + errorcount = error_count + self.errorok = False + errtoken = lookahead + if errtoken.type == '$end': + errtoken = None # End of file! + if self.errorfunc: + if errtoken and not hasattr(errtoken, 'lexer'): + errtoken.lexer = lexer + self.state = state + tok = call_errorfunc(self.errorfunc, errtoken, self) + if self.errorok: + # User must have done some kind of panic + # mode recovery on their own. 
The + # returned token is the next lookahead + lookahead = tok + errtoken = None + continue + else: + if errtoken: + if hasattr(errtoken, 'lineno'): + lineno = lookahead.lineno + else: + lineno = 0 + if lineno: + sys.stderr.write('yacc: Syntax error at line %d, token=%s\n' % (lineno, errtoken.type)) + else: + sys.stderr.write('yacc: Syntax error, token=%s' % errtoken.type) + else: + sys.stderr.write('yacc: Parse error in input. EOF\n') + return + + else: + errorcount = error_count + + # case 1: the statestack only has 1 entry on it. If we're in this state, the + # entire parse has been rolled back and we're completely hosed. The token is + # discarded and we just keep going. + + if len(statestack) <= 1 and lookahead.type != '$end': + lookahead = None + errtoken = None + state = 0 + # Nuke the pushback stack + del lookaheadstack[:] + continue + + # case 2: the statestack has a couple of entries on it, but we're + # at the end of the file. nuke the top entry and generate an error token + + # Start nuking entries on the stack + if lookahead.type == '$end': + # Whoa. We're really hosed here. Bail out + return + + if lookahead.type != 'error': + sym = symstack[-1] + if sym.type == 'error': + # Hmmm. Error is on top of stack, we'll just nuke input + # symbol and continue + #--! TRACKING + if tracking: + sym.endlineno = getattr(lookahead, 'lineno', sym.lineno) + sym.endlexpos = getattr(lookahead, 'lexpos', sym.lexpos) + #--! TRACKING + lookahead = None + continue + + # Create the error symbol for the first time and make it the new lookahead symbol + t = YaccSymbol() + t.type = 'error' + + if hasattr(lookahead, 'lineno'): + t.lineno = t.endlineno = lookahead.lineno + if hasattr(lookahead, 'lexpos'): + t.lexpos = t.endlexpos = lookahead.lexpos + t.value = lookahead + lookaheadstack.append(lookahead) + lookahead = t + else: + sym = symstack.pop() + #--! TRACKING + if tracking: + lookahead.lineno = sym.lineno + lookahead.lexpos = sym.lexpos + #--! TRACKING + statestack.pop() + state = statestack[-1] + + continue + + # Call an error function here + raise RuntimeError('yacc: internal parser error!!!\n') + + #--! parsedebug-end + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # parseopt(). + # + # Optimized version of parse() method. DO NOT EDIT THIS CODE DIRECTLY! + # This code is automatically generated by the ply/ygen.py script. Make + # changes to the parsedebug() method instead. + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + def parseopt(self, input=None, lexer=None, debug=False, tracking=False, tokenfunc=None): + #--! parseopt-start + lookahead = None # Current lookahead symbol + lookaheadstack = [] # Stack of lookahead symbols + actions = self.action # Local reference to action table (to avoid lookup on self.) + goto = self.goto # Local reference to goto table (to avoid lookup on self.) + prod = self.productions # Local reference to production list (to avoid lookup on self.) + defaulted_states = self.defaulted_states # Local reference to defaulted states + pslice = YaccProduction(None) # Production object passed to grammar rules + errorcount = 0 # Used during error recovery + + + # If no lexer was given, we will try to use the lex module + if not lexer: + from . 
import lex + lexer = lex.lexer + + # Set up the lexer and parser objects on pslice + pslice.lexer = lexer + pslice.parser = self + + # If input was supplied, pass to lexer + if input is not None: + lexer.input(input) + + if tokenfunc is None: + # Tokenize function + get_token = lexer.token + else: + get_token = tokenfunc + + # Set the parser() token method (sometimes used in error recovery) + self.token = get_token + + # Set up the state and symbol stacks + + statestack = [] # Stack of parsing states + self.statestack = statestack + symstack = [] # Stack of grammar symbols + self.symstack = symstack + + pslice.stack = symstack # Put in the production + errtoken = None # Err token + + # The start state is assumed to be (0,$end) + + statestack.append(0) + sym = YaccSymbol() + sym.type = '$end' + symstack.append(sym) + state = 0 + while True: + # Get the next symbol on the input. If a lookahead symbol + # is already set, we just use that. Otherwise, we'll pull + # the next token off of the lookaheadstack or from the lexer + + + if state not in defaulted_states: + if not lookahead: + if not lookaheadstack: + lookahead = get_token() # Get the next token + else: + lookahead = lookaheadstack.pop() + if not lookahead: + lookahead = YaccSymbol() + lookahead.type = '$end' + + # Check the action table + ltype = lookahead.type + t = actions[state].get(ltype) + else: + t = defaulted_states[state] + + + if t is not None: + if t > 0: + # shift a symbol on the stack + statestack.append(t) + state = t + + + symstack.append(lookahead) + lookahead = None + + # Decrease error count on successful shift + if errorcount: + errorcount -= 1 + continue + + if t < 0: + # reduce a symbol on the stack, emit a production + p = prod[-t] + pname = p.name + plen = p.len + + # Get production function + sym = YaccSymbol() + sym.type = pname # Production name + sym.value = None + + + if plen: + targ = symstack[-plen-1:] + targ[0] = sym + + #--! TRACKING + if tracking: + t1 = targ[1] + sym.lineno = t1.lineno + sym.lexpos = t1.lexpos + t1 = targ[-1] + sym.endlineno = getattr(t1, 'endlineno', t1.lineno) + sym.endlexpos = getattr(t1, 'endlexpos', t1.lexpos) + #--! TRACKING + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # below as a performance optimization. Make sure + # changes get made in both locations. + + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + del symstack[-plen:] + self.state = state + p.callable(pslice) + del statestack[-plen:] + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + symstack.extend(targ[1:-1]) # Put the production slice back on the stack + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + else: + + #--! TRACKING + if tracking: + sym.lineno = lexer.lineno + sym.lexpos = lexer.lexpos + #--! TRACKING + + targ = [sym] + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # above as a performance optimization. Make sure + # changes get made in both locations. 
+ + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + self.state = state + p.callable(pslice) + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + if t == 0: + n = symstack[-1] + result = getattr(n, 'value', None) + return result + + if t is None: + + + # We have some kind of parsing error here. To handle + # this, we are going to push the current token onto + # the tokenstack and replace it with an 'error' token. + # If there are any synchronization rules, they may + # catch it. + # + # In addition to pushing the error token, we call call + # the user defined p_error() function if this is the + # first syntax error. This function is only called if + # errorcount == 0. + if errorcount == 0 or self.errorok: + errorcount = error_count + self.errorok = False + errtoken = lookahead + if errtoken.type == '$end': + errtoken = None # End of file! + if self.errorfunc: + if errtoken and not hasattr(errtoken, 'lexer'): + errtoken.lexer = lexer + self.state = state + tok = call_errorfunc(self.errorfunc, errtoken, self) + if self.errorok: + # User must have done some kind of panic + # mode recovery on their own. The + # returned token is the next lookahead + lookahead = tok + errtoken = None + continue + else: + if errtoken: + if hasattr(errtoken, 'lineno'): + lineno = lookahead.lineno + else: + lineno = 0 + if lineno: + sys.stderr.write('yacc: Syntax error at line %d, token=%s\n' % (lineno, errtoken.type)) + else: + sys.stderr.write('yacc: Syntax error, token=%s' % errtoken.type) + else: + sys.stderr.write('yacc: Parse error in input. EOF\n') + return + + else: + errorcount = error_count + + # case 1: the statestack only has 1 entry on it. If we're in this state, the + # entire parse has been rolled back and we're completely hosed. The token is + # discarded and we just keep going. + + if len(statestack) <= 1 and lookahead.type != '$end': + lookahead = None + errtoken = None + state = 0 + # Nuke the pushback stack + del lookaheadstack[:] + continue + + # case 2: the statestack has a couple of entries on it, but we're + # at the end of the file. nuke the top entry and generate an error token + + # Start nuking entries on the stack + if lookahead.type == '$end': + # Whoa. We're really hosed here. Bail out + return + + if lookahead.type != 'error': + sym = symstack[-1] + if sym.type == 'error': + # Hmmm. Error is on top of stack, we'll just nuke input + # symbol and continue + #--! TRACKING + if tracking: + sym.endlineno = getattr(lookahead, 'lineno', sym.lineno) + sym.endlexpos = getattr(lookahead, 'lexpos', sym.lexpos) + #--! TRACKING + lookahead = None + continue + + # Create the error symbol for the first time and make it the new lookahead symbol + t = YaccSymbol() + t.type = 'error' + + if hasattr(lookahead, 'lineno'): + t.lineno = t.endlineno = lookahead.lineno + if hasattr(lookahead, 'lexpos'): + t.lexpos = t.endlexpos = lookahead.lexpos + t.value = lookahead + lookaheadstack.append(lookahead) + lookahead = t + else: + sym = symstack.pop() + #--! 
TRACKING + if tracking: + lookahead.lineno = sym.lineno + lookahead.lexpos = sym.lexpos + #--! TRACKING + statestack.pop() + state = statestack[-1] + + continue + + # Call an error function here + raise RuntimeError('yacc: internal parser error!!!\n') + + #--! parseopt-end + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # parseopt_notrack(). + # + # Optimized version of parseopt() with line number tracking removed. + # DO NOT EDIT THIS CODE DIRECTLY. This code is automatically generated + # by the ply/ygen.py script. Make changes to the parsedebug() method instead. + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + def parseopt_notrack(self, input=None, lexer=None, debug=False, tracking=False, tokenfunc=None): + #--! parseopt-notrack-start + lookahead = None # Current lookahead symbol + lookaheadstack = [] # Stack of lookahead symbols + actions = self.action # Local reference to action table (to avoid lookup on self.) + goto = self.goto # Local reference to goto table (to avoid lookup on self.) + prod = self.productions # Local reference to production list (to avoid lookup on self.) + defaulted_states = self.defaulted_states # Local reference to defaulted states + pslice = YaccProduction(None) # Production object passed to grammar rules + errorcount = 0 # Used during error recovery + + + # If no lexer was given, we will try to use the lex module + if not lexer: + from . import lex + lexer = lex.lexer + + # Set up the lexer and parser objects on pslice + pslice.lexer = lexer + pslice.parser = self + + # If input was supplied, pass to lexer + if input is not None: + lexer.input(input) + + if tokenfunc is None: + # Tokenize function + get_token = lexer.token + else: + get_token = tokenfunc + + # Set the parser() token method (sometimes used in error recovery) + self.token = get_token + + # Set up the state and symbol stacks + + statestack = [] # Stack of parsing states + self.statestack = statestack + symstack = [] # Stack of grammar symbols + self.symstack = symstack + + pslice.stack = symstack # Put in the production + errtoken = None # Err token + + # The start state is assumed to be (0,$end) + + statestack.append(0) + sym = YaccSymbol() + sym.type = '$end' + symstack.append(sym) + state = 0 + while True: + # Get the next symbol on the input. If a lookahead symbol + # is already set, we just use that. Otherwise, we'll pull + # the next token off of the lookaheadstack or from the lexer + + + if state not in defaulted_states: + if not lookahead: + if not lookaheadstack: + lookahead = get_token() # Get the next token + else: + lookahead = lookaheadstack.pop() + if not lookahead: + lookahead = YaccSymbol() + lookahead.type = '$end' + + # Check the action table + ltype = lookahead.type + t = actions[state].get(ltype) + else: + t = defaulted_states[state] + + + if t is not None: + if t > 0: + # shift a symbol on the stack + statestack.append(t) + state = t + + + symstack.append(lookahead) + lookahead = None + + # Decrease error count on successful shift + if errorcount: + errorcount -= 1 + continue + + if t < 0: + # reduce a symbol on the stack, emit a production + p = prod[-t] + pname = p.name + plen = p.len + + # Get production function + sym = YaccSymbol() + sym.type = pname # Production name + sym.value = None + + + if plen: + targ = symstack[-plen-1:] + targ[0] = sym + + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # below as a performance optimization. 
Make sure + # changes get made in both locations. + + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + del symstack[-plen:] + self.state = state + p.callable(pslice) + del statestack[-plen:] + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + symstack.extend(targ[1:-1]) # Put the production slice back on the stack + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + else: + + + targ = [sym] + + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + # The code enclosed in this section is duplicated + # above as a performance optimization. Make sure + # changes get made in both locations. + + pslice.slice = targ + + try: + # Call the grammar rule with our special slice object + self.state = state + p.callable(pslice) + symstack.append(sym) + state = goto[statestack[-1]][pname] + statestack.append(state) + except SyntaxError: + # If an error was set. Enter error recovery state + lookaheadstack.append(lookahead) # Save the current lookahead token + statestack.pop() # Pop back one state (before the reduce) + state = statestack[-1] + sym.type = 'error' + sym.value = 'error' + lookahead = sym + errorcount = error_count + self.errorok = False + + continue + # !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! + + if t == 0: + n = symstack[-1] + result = getattr(n, 'value', None) + return result + + if t is None: + + + # We have some kind of parsing error here. To handle + # this, we are going to push the current token onto + # the tokenstack and replace it with an 'error' token. + # If there are any synchronization rules, they may + # catch it. + # + # In addition to pushing the error token, we call call + # the user defined p_error() function if this is the + # first syntax error. This function is only called if + # errorcount == 0. + if errorcount == 0 or self.errorok: + errorcount = error_count + self.errorok = False + errtoken = lookahead + if errtoken.type == '$end': + errtoken = None # End of file! + if self.errorfunc: + if errtoken and not hasattr(errtoken, 'lexer'): + errtoken.lexer = lexer + self.state = state + tok = call_errorfunc(self.errorfunc, errtoken, self) + if self.errorok: + # User must have done some kind of panic + # mode recovery on their own. The + # returned token is the next lookahead + lookahead = tok + errtoken = None + continue + else: + if errtoken: + if hasattr(errtoken, 'lineno'): + lineno = lookahead.lineno + else: + lineno = 0 + if lineno: + sys.stderr.write('yacc: Syntax error at line %d, token=%s\n' % (lineno, errtoken.type)) + else: + sys.stderr.write('yacc: Syntax error, token=%s' % errtoken.type) + else: + sys.stderr.write('yacc: Parse error in input. EOF\n') + return + + else: + errorcount = error_count + + # case 1: the statestack only has 1 entry on it. If we're in this state, the + # entire parse has been rolled back and we're completely hosed. The token is + # discarded and we just keep going. 
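The recovery path around this point is what a user-level p_error() cooperates with: returning normally leaves the engine in the panic mode implemented here, while calling parser.errok() and returning a token resumes parsing with that token as the new lookahead. A hedged sketch of the conventional handler (SEMI is an invented synchronization token; parser is assumed to be the module-level result of yacc.yacc()):

    def p_error(p):
        if p is None:
            print('Syntax error at end of input')
            return
        print('Syntax error at token %s (line %d)' % (p.type, p.lineno))
        # Discard tokens up to a synchronization point, then resume.
        while True:
            tok = parser.token()
            if not tok or tok.type == 'SEMI':
                break
        parser.errok()
        return tok   # becomes the new lookahead, per the errorok branch above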
+ + if len(statestack) <= 1 and lookahead.type != '$end': + lookahead = None + errtoken = None + state = 0 + # Nuke the pushback stack + del lookaheadstack[:] + continue + + # case 2: the statestack has a couple of entries on it, but we're + # at the end of the file. nuke the top entry and generate an error token + + # Start nuking entries on the stack + if lookahead.type == '$end': + # Whoa. We're really hosed here. Bail out + return + + if lookahead.type != 'error': + sym = symstack[-1] + if sym.type == 'error': + # Hmmm. Error is on top of stack, we'll just nuke input + # symbol and continue + lookahead = None + continue + + # Create the error symbol for the first time and make it the new lookahead symbol + t = YaccSymbol() + t.type = 'error' + + if hasattr(lookahead, 'lineno'): + t.lineno = t.endlineno = lookahead.lineno + if hasattr(lookahead, 'lexpos'): + t.lexpos = t.endlexpos = lookahead.lexpos + t.value = lookahead + lookaheadstack.append(lookahead) + lookahead = t + else: + sym = symstack.pop() + statestack.pop() + state = statestack[-1] + + continue + + # Call an error function here + raise RuntimeError('yacc: internal parser error!!!\n') + + #--! parseopt-notrack-end + +# ----------------------------------------------------------------------------- +# === Grammar Representation === +# +# The following functions, classes, and variables are used to represent and +# manipulate the rules that make up a grammar. +# ----------------------------------------------------------------------------- + +# regex matching identifiers +_is_identifier = re.compile(r'^[a-zA-Z0-9_-]+$') + +# ----------------------------------------------------------------------------- +# class Production: +# +# This class stores the raw information about a single production or grammar rule. +# A grammar rule refers to a specification such as this: +# +# expr : expr PLUS term +# +# Here are the basic attributes defined on all productions +# +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','PLUS','term'] +# prec - Production precedence level +# number - Production number. +# func - Function that executes on reduce +# file - File where production function is defined +# lineno - Line number where production function is defined +# +# The following attributes are defined or optional. 
+# +# len - Length of the production (number of symbols on right hand side) +# usyms - Set of unique symbols found in the production +# ----------------------------------------------------------------------------- + +class Production(object): + reduced = 0 + def __init__(self, number, name, prod, precedence=('right', 0), func=None, file='', line=0): + self.name = name + self.prod = tuple(prod) + self.number = number + self.func = func + self.callable = None + self.file = file + self.line = line + self.prec = precedence + + # Internal settings used during table construction + + self.len = len(self.prod) # Length of the production + + # Create a list of unique production symbols used in the production + self.usyms = [] + for s in self.prod: + if s not in self.usyms: + self.usyms.append(s) + + # List of all LR items for the production + self.lr_items = [] + self.lr_next = None + + # Create a string representation + if self.prod: + self.str = '%s -> %s' % (self.name, ' '.join(self.prod)) + else: + self.str = '%s -> ' % self.name + + def __str__(self): + return self.str + + def __repr__(self): + return 'Production(' + str(self) + ')' + + def __len__(self): + return len(self.prod) + + def __nonzero__(self): + return 1 + + def __getitem__(self, index): + return self.prod[index] + + # Return the nth lr_item from the production (or None if at the end) + def lr_item(self, n): + if n > len(self.prod): + return None + p = LRItem(self, n) + # Precompute the list of productions immediately following. + try: + p.lr_after = Prodnames[p.prod[n+1]] + except (IndexError, KeyError): + p.lr_after = [] + try: + p.lr_before = p.prod[n-1] + except IndexError: + p.lr_before = None + return p + + # Bind the production function name to a callable + def bind(self, pdict): + if self.func: + self.callable = pdict[self.func] + +# This class serves as a minimal standin for Production objects when +# reading table data from files. It only contains information +# actually used by the LR parsing engine, plus some additional +# debugging information. +class MiniProduction(object): + def __init__(self, str, name, len, func, file, line): + self.name = name + self.len = len + self.func = func + self.callable = None + self.file = file + self.line = line + self.str = str + + def __str__(self): + return self.str + + def __repr__(self): + return 'MiniProduction(%s)' % self.str + + # Bind the production function name to a callable + def bind(self, pdict): + if self.func: + self.callable = pdict[self.func] + + +# ----------------------------------------------------------------------------- +# class LRItem +# +# This class represents a specific stage of parsing a production rule. For +# example: +# +# expr : expr . PLUS term +# +# In the above, the "." represents the current location of the parse. Here +# basic attributes: +# +# name - Name of the production. For example 'expr' +# prod - A list of symbols on the right side ['expr','.', 'PLUS','term'] +# number - Production number. +# +# lr_next Next LR item. Example, if we are ' expr -> expr . PLUS term' +# then lr_next refers to 'expr -> expr PLUS . term' +# lr_index - LR item index (location of the ".") in the prod list. 
+# lookaheads - LALR lookahead symbols for this item +# len - Length of the production (number of symbols on right hand side) +# lr_after - List of all productions that immediately follow +# lr_before - Grammar symbol immediately before +# ----------------------------------------------------------------------------- + +class LRItem(object): + def __init__(self, p, n): + self.name = p.name + self.prod = list(p.prod) + self.number = p.number + self.lr_index = n + self.lookaheads = {} + self.prod.insert(n, '.') + self.prod = tuple(self.prod) + self.len = len(self.prod) + self.usyms = p.usyms + + def __str__(self): + if self.prod: + s = '%s -> %s' % (self.name, ' '.join(self.prod)) + else: + s = '%s -> ' % self.name + return s + + def __repr__(self): + return 'LRItem(' + str(self) + ')' + +# ----------------------------------------------------------------------------- +# rightmost_terminal() +# +# Return the rightmost terminal from a list of symbols. Used in add_production() +# ----------------------------------------------------------------------------- +def rightmost_terminal(symbols, terminals): + i = len(symbols) - 1 + while i >= 0: + if symbols[i] in terminals: + return symbols[i] + i -= 1 + return None + +# ----------------------------------------------------------------------------- +# === GRAMMAR CLASS === +# +# The following class represents the contents of the specified grammar along +# with various computed properties such as first sets, follow sets, LR items, etc. +# This data is used for critical parts of the table generation process later. +# ----------------------------------------------------------------------------- + +class GrammarError(YaccError): + pass + +class Grammar(object): + def __init__(self, terminals): + self.Productions = [None] # A list of all of the productions. The first + # entry is always reserved for the purpose of + # building an augmented grammar + + self.Prodnames = {} # A dictionary mapping the names of nonterminals to a list of all + # productions of that nonterminal. + + self.Prodmap = {} # A dictionary that is only used to detect duplicate + # productions. + + self.Terminals = {} # A dictionary mapping the names of terminal symbols to a + # list of the rules where they are used. + + for term in terminals: + self.Terminals[term] = [] + + self.Terminals['error'] = [] + + self.Nonterminals = {} # A dictionary mapping names of nonterminals to a list + # of rule numbers where they are used. + + self.First = {} # A dictionary of precomputed FIRST(x) symbols + + self.Follow = {} # A dictionary of precomputed FOLLOW(x) symbols + + self.Precedence = {} # Precedence rules for each terminal. Contains tuples of the + # form ('right',level) or ('nonassoc', level) or ('left',level) + + self.UsedPrecedence = set() # Precedence rules that were actually used by the grammer. + # This is only used to provide error checking and to generate + # a warning about unused precedence rules. + + self.Start = None # Starting symbol for the grammar + + + def __len__(self): + return len(self.Productions) + + def __getitem__(self, index): + return self.Productions[index] + + # ----------------------------------------------------------------------------- + # set_precedence() + # + # Sets the precedence for a given terminal. assoc is the associativity such as + # 'left','right', or 'nonassoc'. level is a numeric level. 
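+    #
+    # Illustrative example (an assumption about user code, not part of PLY):
+    # a user-level precedence table such as
+    #
+    #     precedence = (('left', 'PLUS', 'MINUS'), ('left', 'TIMES', 'DIVIDE'))
+    #
+    # arrives here as set_precedence('PLUS', 'left', 1),
+    # set_precedence('TIMES', 'left', 2), and so on, with higher levels
+    # binding more tightly.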
+ # + # ----------------------------------------------------------------------------- + + def set_precedence(self, term, assoc, level): + assert self.Productions == [None], 'Must call set_precedence() before add_production()' + if term in self.Precedence: + raise GrammarError('Precedence already specified for terminal %r' % term) + if assoc not in ['left', 'right', 'nonassoc']: + raise GrammarError("Associativity must be one of 'left','right', or 'nonassoc'") + self.Precedence[term] = (assoc, level) + + # ----------------------------------------------------------------------------- + # add_production() + # + # Given an action function, this function assembles a production rule and + # computes its precedence level. + # + # The production rule is supplied as a list of symbols. For example, + # a rule such as 'expr : expr PLUS term' has a production name of 'expr' and + # symbols ['expr','PLUS','term']. + # + # Precedence is determined by the precedence of the right-most non-terminal + # or the precedence of a terminal specified by %prec. + # + # A variety of error checks are performed to make sure production symbols + # are valid and that %prec is used correctly. + # ----------------------------------------------------------------------------- + + def add_production(self, prodname, syms, func=None, file='', line=0): + + if prodname in self.Terminals: + raise GrammarError('%s:%d: Illegal rule name %r. Already defined as a token' % (file, line, prodname)) + if prodname == 'error': + raise GrammarError('%s:%d: Illegal rule name %r. error is a reserved word' % (file, line, prodname)) + if not _is_identifier.match(prodname): + raise GrammarError('%s:%d: Illegal rule name %r' % (file, line, prodname)) + + # Look for literal tokens + for n, s in enumerate(syms): + if s[0] in "'\"": + try: + c = eval(s) + if (len(c) > 1): + raise GrammarError('%s:%d: Literal token %s in rule %r may only be a single character' % + (file, line, s, prodname)) + if c not in self.Terminals: + self.Terminals[c] = [] + syms[n] = c + continue + except SyntaxError: + pass + if not _is_identifier.match(s) and s != '%prec': + raise GrammarError('%s:%d: Illegal name %r in rule %r' % (file, line, s, prodname)) + + # Determine the precedence level + if '%prec' in syms: + if syms[-1] == '%prec': + raise GrammarError('%s:%d: Syntax error. Nothing follows %%prec' % (file, line)) + if syms[-2] != '%prec': + raise GrammarError('%s:%d: Syntax error. %%prec can only appear at the end of a grammar rule' % + (file, line)) + precname = syms[-1] + prodprec = self.Precedence.get(precname) + if not prodprec: + raise GrammarError('%s:%d: Nothing known about the precedence of %r' % (file, line, precname)) + else: + self.UsedPrecedence.add(precname) + del syms[-2:] # Drop %prec from the rule + else: + # If no %prec, precedence is determined by the rightmost terminal symbol + precname = rightmost_terminal(syms, self.Terminals) + prodprec = self.Precedence.get(precname, ('right', 0)) + + # See if the rule is already in the rulemap + map = '%s -> %s' % (prodname, syms) + if map in self.Prodmap: + m = self.Prodmap[map] + raise GrammarError('%s:%d: Duplicate rule %s. ' % (file, line, m) + + 'Previous definition at %s:%d' % (m.file, m.line)) + + # From this point on, everything is valid. 
Create a new Production instance + pnumber = len(self.Productions) + if prodname not in self.Nonterminals: + self.Nonterminals[prodname] = [] + + # Add the production number to Terminals and Nonterminals + for t in syms: + if t in self.Terminals: + self.Terminals[t].append(pnumber) + else: + if t not in self.Nonterminals: + self.Nonterminals[t] = [] + self.Nonterminals[t].append(pnumber) + + # Create a production and add it to the list of productions + p = Production(pnumber, prodname, syms, prodprec, func, file, line) + self.Productions.append(p) + self.Prodmap[map] = p + + # Add to the global productions list + try: + self.Prodnames[prodname].append(p) + except KeyError: + self.Prodnames[prodname] = [p] + + # ----------------------------------------------------------------------------- + # set_start() + # + # Sets the starting symbol and creates the augmented grammar. Production + # rule 0 is S' -> start where start is the start symbol. + # ----------------------------------------------------------------------------- + + def set_start(self, start=None): + if not start: + start = self.Productions[1].name + if start not in self.Nonterminals: + raise GrammarError('start symbol %s undefined' % start) + self.Productions[0] = Production(0, "S'", [start]) + self.Nonterminals[start].append(0) + self.Start = start + + # ----------------------------------------------------------------------------- + # find_unreachable() + # + # Find all of the nonterminal symbols that can't be reached from the starting + # symbol. Returns a list of nonterminals that can't be reached. + # ----------------------------------------------------------------------------- + + def find_unreachable(self): + + # Mark all symbols that are reachable from a symbol s + def mark_reachable_from(s): + if s in reachable: + return + reachable.add(s) + for p in self.Prodnames.get(s, []): + for r in p.prod: + mark_reachable_from(r) + + reachable = set() + mark_reachable_from(self.Productions[0].prod[0]) + return [s for s in self.Nonterminals if s not in reachable] + + # ----------------------------------------------------------------------------- + # infinite_cycles() + # + # This function looks at the various parsing rules and tries to detect + # infinite recursion cycles (grammar rules where there is no possible way + # to derive a string of only terminals). + # ----------------------------------------------------------------------------- + + def infinite_cycles(self): + terminates = {} + + # Terminals: + for t in self.Terminals: + terminates[t] = True + + terminates['$end'] = True + + # Nonterminals: + + # Initialize to false: + for n in self.Nonterminals: + terminates[n] = False + + # Then propagate termination until no change: + while True: + some_change = False + for (n, pl) in self.Prodnames.items(): + # Nonterminal n terminates iff any of its productions terminates. + for p in pl: + # Production p terminates iff all of its rhs symbols terminate. + for s in p.prod: + if not terminates[s]: + # The symbol s does not terminate, + # so production p does not terminate. + p_terminates = False + break + else: + # didn't break from the loop, + # so every symbol s terminates + # so production p terminates. + p_terminates = True + + if p_terminates: + # symbol n terminates! + if not terminates[n]: + terminates[n] = True + some_change = True + # Don't need to consider any more productions for this n. 
+ break + + if not some_change: + break + + infinite = [] + for (s, term) in terminates.items(): + if not term: + if s not in self.Prodnames and s not in self.Terminals and s != 'error': + # s is used-but-not-defined, and we've already warned of that, + # so it would be overkill to say that it's also non-terminating. + pass + else: + infinite.append(s) + + return infinite + + # ----------------------------------------------------------------------------- + # undefined_symbols() + # + # Find all symbols that were used the grammar, but not defined as tokens or + # grammar rules. Returns a list of tuples (sym, prod) where sym in the symbol + # and prod is the production where the symbol was used. + # ----------------------------------------------------------------------------- + def undefined_symbols(self): + result = [] + for p in self.Productions: + if not p: + continue + + for s in p.prod: + if s not in self.Prodnames and s not in self.Terminals and s != 'error': + result.append((s, p)) + return result + + # ----------------------------------------------------------------------------- + # unused_terminals() + # + # Find all terminals that were defined, but not used by the grammar. Returns + # a list of all symbols. + # ----------------------------------------------------------------------------- + def unused_terminals(self): + unused_tok = [] + for s, v in self.Terminals.items(): + if s != 'error' and not v: + unused_tok.append(s) + + return unused_tok + + # ------------------------------------------------------------------------------ + # unused_rules() + # + # Find all grammar rules that were defined, but not used (maybe not reachable) + # Returns a list of productions. + # ------------------------------------------------------------------------------ + + def unused_rules(self): + unused_prod = [] + for s, v in self.Nonterminals.items(): + if not v: + p = self.Prodnames[s][0] + unused_prod.append(p) + return unused_prod + + # ----------------------------------------------------------------------------- + # unused_precedence() + # + # Returns a list of tuples (term,precedence) corresponding to precedence + # rules that were never used by the grammar. term is the name of the terminal + # on which precedence was applied and precedence is a string such as 'left' or + # 'right' corresponding to the type of precedence. + # ----------------------------------------------------------------------------- + + def unused_precedence(self): + unused = [] + for termname in self.Precedence: + if not (termname in self.Terminals or termname in self.UsedPrecedence): + unused.append((termname, self.Precedence[termname][0])) + + return unused + + # ------------------------------------------------------------------------- + # _first() + # + # Compute the value of FIRST1(beta) where beta is a tuple of symbols. + # + # During execution of compute_first1, the result may be incomplete. + # Afterward (e.g., when called from compute_follow()), it will be complete. + # ------------------------------------------------------------------------- + def _first(self, beta): + + # We are computing First(x1,x2,x3,...,xn) + result = [] + for x in beta: + x_produces_empty = False + + # Add all the non- symbols of First[x] to the result. + for f in self.First[x]: + if f == '': + x_produces_empty = True + else: + if f not in result: + result.append(f) + + if x_produces_empty: + # We have to consider the next x in beta, + # i.e. stay in the loop. + pass + else: + # We don't have to consider any further symbols in beta. 
+ break + else: + # There was no 'break' from the loop, + # so x_produces_empty was true for all x in beta, + # so beta produces empty as well. + result.append('') + + return result + + # ------------------------------------------------------------------------- + # compute_first() + # + # Compute the value of FIRST1(X) for all symbols + # ------------------------------------------------------------------------- + def compute_first(self): + if self.First: + return self.First + + # Terminals: + for t in self.Terminals: + self.First[t] = [t] + + self.First['$end'] = ['$end'] + + # Nonterminals: + + # Initialize to the empty set: + for n in self.Nonterminals: + self.First[n] = [] + + # Then propagate symbols until no change: + while True: + some_change = False + for n in self.Nonterminals: + for p in self.Prodnames[n]: + for f in self._first(p.prod): + if f not in self.First[n]: + self.First[n].append(f) + some_change = True + if not some_change: + break + + return self.First + + # --------------------------------------------------------------------- + # compute_follow() + # + # Computes all of the follow sets for every non-terminal symbol. The + # follow set is the set of all symbols that might follow a given + # non-terminal. See the Dragon book, 2nd Ed. p. 189. + # --------------------------------------------------------------------- + def compute_follow(self, start=None): + # If already computed, return the result + if self.Follow: + return self.Follow + + # If first sets not computed yet, do that first. + if not self.First: + self.compute_first() + + # Add '$end' to the follow list of the start symbol + for k in self.Nonterminals: + self.Follow[k] = [] + + if not start: + start = self.Productions[1].name + + self.Follow[start] = ['$end'] + + while True: + didadd = False + for p in self.Productions[1:]: + # Here is the production set + for i, B in enumerate(p.prod): + if B in self.Nonterminals: + # Okay. We got a non-terminal in a production + fst = self._first(p.prod[i+1:]) + hasempty = False + for f in fst: + if f != '' and f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = True + if f == '': + hasempty = True + if hasempty or i == (len(p.prod)-1): + # Add elements of follow(a) to follow(b) + for f in self.Follow[p.name]: + if f not in self.Follow[B]: + self.Follow[B].append(f) + didadd = True + if not didadd: + break + return self.Follow + + + # ----------------------------------------------------------------------------- + # build_lritems() + # + # This function walks the list of productions and builds a complete set of the + # LR items. The LR items are stored in two ways: First, they are uniquely + # numbered and placed in the list _lritems. Second, a linked list of LR items + # is built for each production. For example: + # + # E -> E PLUS E + # + # Creates the list + # + # [E -> . E PLUS E, E -> E . PLUS E, E -> E PLUS . E, E -> E PLUS E . 
] + # ----------------------------------------------------------------------------- + + def build_lritems(self): + for p in self.Productions: + lastlri = p + i = 0 + lr_items = [] + while True: + if i > len(p): + lri = None + else: + lri = LRItem(p, i) + # Precompute the list of productions immediately following + try: + lri.lr_after = self.Prodnames[lri.prod[i+1]] + except (IndexError, KeyError): + lri.lr_after = [] + try: + lri.lr_before = lri.prod[i-1] + except IndexError: + lri.lr_before = None + + lastlri.lr_next = lri + if not lri: + break + lr_items.append(lri) + lastlri = lri + i += 1 + p.lr_items = lr_items + +# ----------------------------------------------------------------------------- +# == Class LRTable == +# +# This basic class represents a basic table of LR parsing information. +# Methods for generating the tables are not defined here. They are defined +# in the derived class LRGeneratedTable. +# ----------------------------------------------------------------------------- + +class VersionError(YaccError): + pass + +class LRTable(object): + def __init__(self): + self.lr_action = None + self.lr_goto = None + self.lr_productions = None + self.lr_method = None + + def read_table(self, module): + if isinstance(module, types.ModuleType): + parsetab = module + else: + exec('import %s' % module) + parsetab = sys.modules[module] + + if parsetab._tabversion != __tabversion__: + raise VersionError('yacc table file version is out of date') + + self.lr_action = parsetab._lr_action + self.lr_goto = parsetab._lr_goto + + self.lr_productions = [] + for p in parsetab._lr_productions: + self.lr_productions.append(MiniProduction(*p)) + + self.lr_method = parsetab._lr_method + return parsetab._lr_signature + + def read_pickle(self, filename): + try: + import cPickle as pickle + except ImportError: + import pickle + + if not os.path.exists(filename): + raise ImportError + + in_f = open(filename, 'rb') + + tabversion = pickle.load(in_f) + if tabversion != __tabversion__: + raise VersionError('yacc table file version is out of date') + self.lr_method = pickle.load(in_f) + signature = pickle.load(in_f) + self.lr_action = pickle.load(in_f) + self.lr_goto = pickle.load(in_f) + productions = pickle.load(in_f) + + self.lr_productions = [] + for p in productions: + self.lr_productions.append(MiniProduction(*p)) + + in_f.close() + return signature + + # Bind all production function names to callable objects in pdict + def bind_callables(self, pdict): + for p in self.lr_productions: + p.bind(pdict) + + +# ----------------------------------------------------------------------------- +# === LR Generator === +# +# The following classes and functions are used to generate LR parsing tables on +# a grammar. +# ----------------------------------------------------------------------------- + +# ----------------------------------------------------------------------------- +# digraph() +# traverse() +# +# The following two functions are used to compute set valued functions +# of the form: +# +# F(x) = F'(x) U U{F(y) | x R y} +# +# This is used to compute the values of Read() sets as well as FOLLOW sets +# in LALR(1) generation. 
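+#
+# Illustrative sketch (not part of PLY): with X = ['a', 'b'], a relation R
+# where R('a') = ['b'] and R('b') = [], and FP = lambda x: [x], digraph()
+# below returns {'a': ['a', 'b'], 'b': ['b']} -- each F(x) is F'(x) plus the
+# union of F(y) over every y related to x.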
+# +# Inputs: X - An input set +# R - A relation +# FP - Set-valued function +# ------------------------------------------------------------------------------ + +def digraph(X, R, FP): + N = {} + for x in X: + N[x] = 0 + stack = [] + F = {} + for x in X: + if N[x] == 0: + traverse(x, N, stack, F, X, R, FP) + return F + +def traverse(x, N, stack, F, X, R, FP): + stack.append(x) + d = len(stack) + N[x] = d + F[x] = FP(x) # F(X) <- F'(x) + + rel = R(x) # Get y's related to x + for y in rel: + if N[y] == 0: + traverse(y, N, stack, F, X, R, FP) + N[x] = min(N[x], N[y]) + for a in F.get(y, []): + if a not in F[x]: + F[x].append(a) + if N[x] == d: + N[stack[-1]] = MAXINT + F[stack[-1]] = F[x] + element = stack.pop() + while element != x: + N[stack[-1]] = MAXINT + F[stack[-1]] = F[x] + element = stack.pop() + +class LALRError(YaccError): + pass + +# ----------------------------------------------------------------------------- +# == LRGeneratedTable == +# +# This class implements the LR table generation algorithm. There are no +# public methods except for write() +# ----------------------------------------------------------------------------- + +class LRGeneratedTable(LRTable): + def __init__(self, grammar, method='LALR', log=None): + if method not in ['SLR', 'LALR']: + raise LALRError('Unsupported method %s' % method) + + self.grammar = grammar + self.lr_method = method + + # Set up the logger + if not log: + log = NullLogger() + self.log = log + + # Internal attributes + self.lr_action = {} # Action table + self.lr_goto = {} # Goto table + self.lr_productions = grammar.Productions # Copy of grammar Production array + self.lr_goto_cache = {} # Cache of computed gotos + self.lr0_cidhash = {} # Cache of closures + + self._add_count = 0 # Internal counter used to detect cycles + + # Diagonistic information filled in by the table generator + self.sr_conflict = 0 + self.rr_conflict = 0 + self.conflicts = [] # List of conflicts + + self.sr_conflicts = [] + self.rr_conflicts = [] + + # Build the tables + self.grammar.build_lritems() + self.grammar.compute_first() + self.grammar.compute_follow() + self.lr_parse_table() + + # Compute the LR(0) closure operation on I, where I is a set of LR(0) items. + + def lr0_closure(self, I): + self._add_count += 1 + + # Add everything in I to J + J = I[:] + didadd = True + while didadd: + didadd = False + for j in J: + for x in j.lr_after: + if getattr(x, 'lr0_added', 0) == self._add_count: + continue + # Add B --> .G to J + J.append(x.lr_next) + x.lr0_added = self._add_count + didadd = True + + return J + + # Compute the LR(0) goto function goto(I,X) where I is a set + # of LR(0) items and X is a grammar symbol. This function is written + # in a way that guarantees uniqueness of the generated goto sets + # (i.e. the same goto set will never be returned as two different Python + # objects). With uniqueness, we can later do fast set comparisons using + # id(obj) instead of element-wise comparison. 
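+    # (Descriptive note: the uniqueness guarantee comes from the nested
+    # dictionary trie kept in self.lr_goto_cache -- the path of id()s of the
+    # shifted items leads to a single '$end' slot holding the canonical set.)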
+ + def lr0_goto(self, I, x): + # First we look for a previously cached entry + g = self.lr_goto_cache.get((id(I), x)) + if g: + return g + + # Now we generate the goto set in a way that guarantees uniqueness + # of the result + + s = self.lr_goto_cache.get(x) + if not s: + s = {} + self.lr_goto_cache[x] = s + + gs = [] + for p in I: + n = p.lr_next + if n and n.lr_before == x: + s1 = s.get(id(n)) + if not s1: + s1 = {} + s[id(n)] = s1 + gs.append(n) + s = s1 + g = s.get('$end') + if not g: + if gs: + g = self.lr0_closure(gs) + s['$end'] = g + else: + s['$end'] = gs + self.lr_goto_cache[(id(I), x)] = g + return g + + # Compute the LR(0) sets of item function + def lr0_items(self): + C = [self.lr0_closure([self.grammar.Productions[0].lr_next])] + i = 0 + for I in C: + self.lr0_cidhash[id(I)] = i + i += 1 + + # Loop over the items in C and each grammar symbols + i = 0 + while i < len(C): + I = C[i] + i += 1 + + # Collect all of the symbols that could possibly be in the goto(I,X) sets + asyms = {} + for ii in I: + for s in ii.usyms: + asyms[s] = None + + for x in asyms: + g = self.lr0_goto(I, x) + if not g or id(g) in self.lr0_cidhash: + continue + self.lr0_cidhash[id(g)] = len(C) + C.append(g) + + return C + + # ----------------------------------------------------------------------------- + # ==== LALR(1) Parsing ==== + # + # LALR(1) parsing is almost exactly the same as SLR except that instead of + # relying upon Follow() sets when performing reductions, a more selective + # lookahead set that incorporates the state of the LR(0) machine is utilized. + # Thus, we mainly just have to focus on calculating the lookahead sets. + # + # The method used here is due to DeRemer and Pennelo (1982). + # + # DeRemer, F. L., and T. J. Pennelo: "Efficient Computation of LALR(1) + # Lookahead Sets", ACM Transactions on Programming Languages and Systems, + # Vol. 4, No. 4, Oct. 1982, pp. 615-649 + # + # Further details can also be found in: + # + # J. Tremblay and P. Sorenson, "The Theory and Practice of Compiler Writing", + # McGraw-Hill Book Company, (1985). + # + # ----------------------------------------------------------------------------- + + # ----------------------------------------------------------------------------- + # compute_nullable_nonterminals() + # + # Creates a dictionary containing all of the non-terminals that might produce + # an empty production. + # ----------------------------------------------------------------------------- + + def compute_nullable_nonterminals(self): + nullable = set() + num_nullable = 0 + while True: + for p in self.grammar.Productions[1:]: + if p.len == 0: + nullable.add(p.name) + continue + for t in p.prod: + if t not in nullable: + break + else: + nullable.add(p.name) + if len(nullable) == num_nullable: + break + num_nullable = len(nullable) + return nullable + + # ----------------------------------------------------------------------------- + # find_nonterminal_trans(C) + # + # Given a set of LR(0) items, this functions finds all of the non-terminal + # transitions. These are transitions in which a dot appears immediately before + # a non-terminal. Returns a list of tuples of the form (state,N) where state + # is the state number and N is the nonterminal symbol. + # + # The input C is the set of LR(0) items. 
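+    #
+    # Illustrative example (hypothetical state numbers): if state 3 contains
+    # the item 'S -> . expr', then (3, 'expr') is a non-terminal transition.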
+ # ----------------------------------------------------------------------------- + + def find_nonterminal_transitions(self, C): + trans = [] + for stateno, state in enumerate(C): + for p in state: + if p.lr_index < p.len - 1: + t = (stateno, p.prod[p.lr_index+1]) + if t[1] in self.grammar.Nonterminals: + if t not in trans: + trans.append(t) + return trans + + # ----------------------------------------------------------------------------- + # dr_relation() + # + # Computes the DR(p,A) relationships for non-terminal transitions. The input + # is a tuple (state,N) where state is a number and N is a nonterminal symbol. + # + # Returns a list of terminals. + # ----------------------------------------------------------------------------- + + def dr_relation(self, C, trans, nullable): + dr_set = {} + state, N = trans + terms = [] + + g = self.lr0_goto(C[state], N) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index+1] + if a in self.grammar.Terminals: + if a not in terms: + terms.append(a) + + # This extra bit is to handle the start state + if state == 0 and N == self.grammar.Productions[0].prod[0]: + terms.append('$end') + + return terms + + # ----------------------------------------------------------------------------- + # reads_relation() + # + # Computes the READS() relation (p,A) READS (t,C). + # ----------------------------------------------------------------------------- + + def reads_relation(self, C, trans, empty): + # Look for empty transitions + rel = [] + state, N = trans + + g = self.lr0_goto(C[state], N) + j = self.lr0_cidhash.get(id(g), -1) + for p in g: + if p.lr_index < p.len - 1: + a = p.prod[p.lr_index + 1] + if a in empty: + rel.append((j, a)) + + return rel + + # ----------------------------------------------------------------------------- + # compute_lookback_includes() + # + # Determines the lookback and includes relations + # + # LOOKBACK: + # + # This relation is determined by running the LR(0) state machine forward. + # For example, starting with a production "N : . A B C", we run it forward + # to obtain "N : A B C ." We then build a relationship between this final + # state and the starting state. These relationships are stored in a dictionary + # lookdict. + # + # INCLUDES: + # + # Computes the INCLUDE() relation (p,A) INCLUDES (p',B). + # + # This relation is used to determine non-terminal transitions that occur + # inside of other non-terminal transition states. (p,A) INCLUDES (p', B) + # if the following holds: + # + # B -> LAT, where T -> epsilon and p' -L-> p + # + # L is essentially a prefix (which may be empty), T is a suffix that must be + # able to derive an empty string. State p' must lead to state p with the string L. + # + # ----------------------------------------------------------------------------- + + def compute_lookback_includes(self, C, trans, nullable): + lookdict = {} # Dictionary of lookback relations + includedict = {} # Dictionary of include relations + + # Make a dictionary of non-terminal transitions + dtrans = {} + for t in trans: + dtrans[t] = 1 + + # Loop over all transitions and compute lookbacks and includes + for state, N in trans: + lookb = [] + includes = [] + for p in C[state]: + if p.name != N: + continue + + # Okay, we have a name match. We now follow the production all the way + # through the state machine until we get the . 
on the right hand side + + lr_index = p.lr_index + j = state + while lr_index < p.len - 1: + lr_index = lr_index + 1 + t = p.prod[lr_index] + + # Check to see if this symbol and state are a non-terminal transition + if (j, t) in dtrans: + # Yes. Okay, there is some chance that this is an includes relation + # the only way to know for certain is whether the rest of the + # production derives empty + + li = lr_index + 1 + while li < p.len: + if p.prod[li] in self.grammar.Terminals: + break # No forget it + if p.prod[li] not in nullable: + break + li = li + 1 + else: + # Appears to be a relation between (j,t) and (state,N) + includes.append((j, t)) + + g = self.lr0_goto(C[j], t) # Go to next set + j = self.lr0_cidhash.get(id(g), -1) # Go to next state + + # When we get here, j is the final state, now we have to locate the production + for r in C[j]: + if r.name != p.name: + continue + if r.len != p.len: + continue + i = 0 + # This look is comparing a production ". A B C" with "A B C ." + while i < r.lr_index: + if r.prod[i] != p.prod[i+1]: + break + i = i + 1 + else: + lookb.append((j, r)) + for i in includes: + if i not in includedict: + includedict[i] = [] + includedict[i].append((state, N)) + lookdict[(state, N)] = lookb + + return lookdict, includedict + + # ----------------------------------------------------------------------------- + # compute_read_sets() + # + # Given a set of LR(0) items, this function computes the read sets. + # + # Inputs: C = Set of LR(0) items + # ntrans = Set of nonterminal transitions + # nullable = Set of empty transitions + # + # Returns a set containing the read sets + # ----------------------------------------------------------------------------- + + def compute_read_sets(self, C, ntrans, nullable): + FP = lambda x: self.dr_relation(C, x, nullable) + R = lambda x: self.reads_relation(C, x, nullable) + F = digraph(ntrans, R, FP) + return F + + # ----------------------------------------------------------------------------- + # compute_follow_sets() + # + # Given a set of LR(0) items, a set of non-terminal transitions, a readset, + # and an include set, this function computes the follow sets + # + # Follow(p,A) = Read(p,A) U U {Follow(p',B) | (p,A) INCLUDES (p',B)} + # + # Inputs: + # ntrans = Set of nonterminal transitions + # readsets = Readset (previously computed) + # inclsets = Include sets (previously computed) + # + # Returns a set containing the follow sets + # ----------------------------------------------------------------------------- + + def compute_follow_sets(self, ntrans, readsets, inclsets): + FP = lambda x: readsets[x] + R = lambda x: inclsets.get(x, []) + F = digraph(ntrans, R, FP) + return F + + # ----------------------------------------------------------------------------- + # add_lookaheads() + # + # Attaches the lookahead symbols to grammar rules. 
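+    # (Descriptive note: for every (state, production) pair that a transition
+    # (p, A) looks back to, the terminals in Follow(p, A) are appended to
+    # production.lookaheads[state] and later drive the reduce actions.)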
+ # + # Inputs: lookbacks - Set of lookback relations + # followset - Computed follow set + # + # This function directly attaches the lookaheads to productions contained + # in the lookbacks set + # ----------------------------------------------------------------------------- + + def add_lookaheads(self, lookbacks, followset): + for trans, lb in lookbacks.items(): + # Loop over productions in lookback + for state, p in lb: + if state not in p.lookaheads: + p.lookaheads[state] = [] + f = followset.get(trans, []) + for a in f: + if a not in p.lookaheads[state]: + p.lookaheads[state].append(a) + + # ----------------------------------------------------------------------------- + # add_lalr_lookaheads() + # + # This function does all of the work of adding lookahead information for use + # with LALR parsing + # ----------------------------------------------------------------------------- + + def add_lalr_lookaheads(self, C): + # Determine all of the nullable nonterminals + nullable = self.compute_nullable_nonterminals() + + # Find all non-terminal transitions + trans = self.find_nonterminal_transitions(C) + + # Compute read sets + readsets = self.compute_read_sets(C, trans, nullable) + + # Compute lookback/includes relations + lookd, included = self.compute_lookback_includes(C, trans, nullable) + + # Compute LALR FOLLOW sets + followsets = self.compute_follow_sets(trans, readsets, included) + + # Add all of the lookaheads + self.add_lookaheads(lookd, followsets) + + # ----------------------------------------------------------------------------- + # lr_parse_table() + # + # This function constructs the parse tables for SLR or LALR + # ----------------------------------------------------------------------------- + def lr_parse_table(self): + Productions = self.grammar.Productions + Precedence = self.grammar.Precedence + goto = self.lr_goto # Goto array + action = self.lr_action # Action array + log = self.log # Logger for output + + actionp = {} # Action production array (temporary) + + log.info('Parsing method: %s', self.lr_method) + + # Step 1: Construct C = { I0, I1, ... IN}, collection of LR(0) items + # This determines the number of states + + C = self.lr0_items() + + if self.lr_method == 'LALR': + self.add_lalr_lookaheads(C) + + # Build the parser table, state by state + st = 0 + for I in C: + # Loop over each production in I + actlist = [] # List of actions + st_action = {} + st_actionp = {} + st_goto = {} + log.info('') + log.info('state %d', st) + log.info('') + for p in I: + log.info(' (%d) %s', p.number, p) + log.info('') + + for p in I: + if p.len == p.lr_index + 1: + if p.name == "S'": + # Start symbol. Accept! + st_action['$end'] = 0 + st_actionp['$end'] = p + else: + # We are at the end of a production. Reduce! + if self.lr_method == 'LALR': + laheads = p.lookaheads[st] + else: + laheads = self.grammar.Follow[p.name] + for a in laheads: + actlist.append((a, p, 'reduce using rule %d (%s)' % (p.number, p))) + r = st_action.get(a) + if r is not None: + # Whoa. Have a shift/reduce or reduce/reduce conflict + if r > 0: + # Need to decide on shift or reduce here + # By default we favor shifting. Need to add + # some precedence rules here. + + # Shift precedence comes from the token + sprec, slevel = Precedence.get(a, ('right', 0)) + + # Reduce precedence comes from rule being reduced (p) + rprec, rlevel = Productions[p.number].prec + + if (slevel < rlevel) or ((slevel == rlevel) and (rprec == 'left')): + # We really need to reduce here. 
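+                                            # (i.e. the incoming token binds less tightly than the rule,
+                                            # or they tie and the rule is left-associative)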
+ st_action[a] = -p.number + st_actionp[a] = p + if not slevel and not rlevel: + log.info(' ! shift/reduce conflict for %s resolved as reduce', a) + self.sr_conflicts.append((st, a, 'reduce')) + Productions[p.number].reduced += 1 + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the shift + if not rlevel: + log.info(' ! shift/reduce conflict for %s resolved as shift', a) + self.sr_conflicts.append((st, a, 'shift')) + elif r < 0: + # Reduce/reduce conflict. In this case, we favor the rule + # that was defined first in the grammar file + oldp = Productions[-r] + pp = Productions[p.number] + if oldp.line > pp.line: + st_action[a] = -p.number + st_actionp[a] = p + chosenp, rejectp = pp, oldp + Productions[p.number].reduced += 1 + Productions[oldp.number].reduced -= 1 + else: + chosenp, rejectp = oldp, pp + self.rr_conflicts.append((st, chosenp, rejectp)) + log.info(' ! reduce/reduce conflict for %s resolved using rule %d (%s)', + a, st_actionp[a].number, st_actionp[a]) + else: + raise LALRError('Unknown conflict in state %d' % st) + else: + st_action[a] = -p.number + st_actionp[a] = p + Productions[p.number].reduced += 1 + else: + i = p.lr_index + a = p.prod[i+1] # Get symbol right after the "." + if a in self.grammar.Terminals: + g = self.lr0_goto(I, a) + j = self.lr0_cidhash.get(id(g), -1) + if j >= 0: + # We are in a shift state + actlist.append((a, p, 'shift and go to state %d' % j)) + r = st_action.get(a) + if r is not None: + # Whoa have a shift/reduce or shift/shift conflict + if r > 0: + if r != j: + raise LALRError('Shift/shift conflict in state %d' % st) + elif r < 0: + # Do a precedence check. + # - if precedence of reduce rule is higher, we reduce. + # - if precedence of reduce is same and left assoc, we reduce. + # - otherwise we shift + + # Shift precedence comes from the token + sprec, slevel = Precedence.get(a, ('right', 0)) + + # Reduce precedence comes from the rule that could have been reduced + rprec, rlevel = Productions[st_actionp[a].number].prec + + if (slevel > rlevel) or ((slevel == rlevel) and (rprec == 'right')): + # We decide to shift here... highest precedence to shift + Productions[st_actionp[a].number].reduced -= 1 + st_action[a] = j + st_actionp[a] = p + if not rlevel: + log.info(' ! shift/reduce conflict for %s resolved as shift', a) + self.sr_conflicts.append((st, a, 'shift')) + elif (slevel == rlevel) and (rprec == 'nonassoc'): + st_action[a] = None + else: + # Hmmm. Guess we'll keep the reduce + if not slevel and not rlevel: + log.info(' ! shift/reduce conflict for %s resolved as reduce', a) + self.sr_conflicts.append((st, a, 'reduce')) + + else: + raise LALRError('Unknown conflict in state %d' % st) + else: + st_action[a] = j + st_actionp[a] = p + + # Print the actions associated with each terminal + _actprint = {} + for a, p, m in actlist: + if a in st_action: + if p is st_actionp[a]: + log.info(' %-15s %s', a, m) + _actprint[(a, m)] = 1 + log.info('') + # Print the actions that were not used. (debugging) + not_used = 0 + for a, p, m in actlist: + if a in st_action: + if p is not st_actionp[a]: + if not (a, m) in _actprint: + log.debug(' ! 
%-15s [ %s ]', a, m) + not_used = 1 + _actprint[(a, m)] = 1 + if not_used: + log.debug('') + + # Construct the goto table for this state + + nkeys = {} + for ii in I: + for s in ii.usyms: + if s in self.grammar.Nonterminals: + nkeys[s] = None + for n in nkeys: + g = self.lr0_goto(I, n) + j = self.lr0_cidhash.get(id(g), -1) + if j >= 0: + st_goto[n] = j + log.info(' %-30s shift and go to state %d', n, j) + + action[st] = st_action + actionp[st] = st_actionp + goto[st] = st_goto + st += 1 + + # ----------------------------------------------------------------------------- + # write() + # + # This function writes the LR parsing tables to a file + # ----------------------------------------------------------------------------- + + def write_table(self, tabmodule, outputdir='', signature=''): + if isinstance(tabmodule, types.ModuleType): + raise IOError("Won't overwrite existing tabmodule") + + basemodulename = tabmodule.split('.')[-1] + filename = os.path.join(outputdir, basemodulename) + '.py' + try: + f = open(filename, 'w') + + f.write(''' +# %s +# This file is automatically generated. Do not edit. +_tabversion = %r + +_lr_method = %r + +_lr_signature = %r + ''' % (os.path.basename(filename), __tabversion__, self.lr_method, signature)) + + # Change smaller to 0 to go back to original tables + smaller = 1 + + # Factor out names to try and make smaller + if smaller: + items = {} + + for s, nd in self.lr_action.items(): + for name, v in nd.items(): + i = items.get(name) + if not i: + i = ([], []) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write('\n_lr_action_items = {') + for k, v in items.items(): + f.write('%r:([' % k) + for i in v[0]: + f.write('%r,' % i) + f.write('],[') + for i in v[1]: + f.write('%r,' % i) + + f.write(']),') + f.write('}\n') + + f.write(''' +_lr_action = {} +for _k, _v in _lr_action_items.items(): + for _x,_y in zip(_v[0],_v[1]): + if not _x in _lr_action: _lr_action[_x] = {} + _lr_action[_x][_k] = _y +del _lr_action_items +''') + + else: + f.write('\n_lr_action = { ') + for k, v in self.lr_action.items(): + f.write('(%r,%r):%r,' % (k[0], k[1], v)) + f.write('}\n') + + if smaller: + # Factor out names to try and make smaller + items = {} + + for s, nd in self.lr_goto.items(): + for name, v in nd.items(): + i = items.get(name) + if not i: + i = ([], []) + items[name] = i + i[0].append(s) + i[1].append(v) + + f.write('\n_lr_goto_items = {') + for k, v in items.items(): + f.write('%r:([' % k) + for i in v[0]: + f.write('%r,' % i) + f.write('],[') + for i in v[1]: + f.write('%r,' % i) + + f.write(']),') + f.write('}\n') + + f.write(''' +_lr_goto = {} +for _k, _v in _lr_goto_items.items(): + for _x, _y in zip(_v[0], _v[1]): + if not _x in _lr_goto: _lr_goto[_x] = {} + _lr_goto[_x][_k] = _y +del _lr_goto_items +''') + else: + f.write('\n_lr_goto = { ') + for k, v in self.lr_goto.items(): + f.write('(%r,%r):%r,' % (k[0], k[1], v)) + f.write('}\n') + + # Write production table + f.write('_lr_productions = [\n') + for p in self.lr_productions: + if p.func: + f.write(' (%r,%r,%d,%r,%r,%d),\n' % (p.str, p.name, p.len, + p.func, os.path.basename(p.file), p.line)) + else: + f.write(' (%r,%r,%d,None,None,None),\n' % (str(p), p.name, p.len)) + f.write(']\n') + f.close() + + except IOError as e: + raise + + + # ----------------------------------------------------------------------------- + # pickle_table() + # + # This function pickles the LR parsing tables to a supplied file object + # ----------------------------------------------------------------------------- + + def 
pickle_table(self, filename, signature=''): + try: + import cPickle as pickle + except ImportError: + import pickle + with open(filename, 'wb') as outf: + pickle.dump(__tabversion__, outf, pickle_protocol) + pickle.dump(self.lr_method, outf, pickle_protocol) + pickle.dump(signature, outf, pickle_protocol) + pickle.dump(self.lr_action, outf, pickle_protocol) + pickle.dump(self.lr_goto, outf, pickle_protocol) + + outp = [] + for p in self.lr_productions: + if p.func: + outp.append((p.str, p.name, p.len, p.func, os.path.basename(p.file), p.line)) + else: + outp.append((str(p), p.name, p.len, None, None, None)) + pickle.dump(outp, outf, pickle_protocol) + +# ----------------------------------------------------------------------------- +# === INTROSPECTION === +# +# The following functions and classes are used to implement the PLY +# introspection features followed by the yacc() function itself. +# ----------------------------------------------------------------------------- + +# ----------------------------------------------------------------------------- +# get_caller_module_dict() +# +# This function returns a dictionary containing all of the symbols defined within +# a caller further down the call stack. This is used to get the environment +# associated with the yacc() call if none was provided. +# ----------------------------------------------------------------------------- + +def get_caller_module_dict(levels): + f = sys._getframe(levels) + ldict = f.f_globals.copy() + if f.f_globals != f.f_locals: + ldict.update(f.f_locals) + return ldict + +# ----------------------------------------------------------------------------- +# parse_grammar() +# +# This takes a raw grammar rule string and parses it into production data +# ----------------------------------------------------------------------------- +def parse_grammar(doc, file, line): + grammar = [] + # Split the doc string into lines + pstrings = doc.splitlines() + lastp = None + dline = line + for ps in pstrings: + dline += 1 + p = ps.split() + if not p: + continue + try: + if p[0] == '|': + # This is a continuation of a previous rule + if not lastp: + raise SyntaxError("%s:%d: Misplaced '|'" % (file, dline)) + prodname = lastp + syms = p[1:] + else: + prodname = p[0] + lastp = prodname + syms = p[2:] + assign = p[1] + if assign != ':' and assign != '::=': + raise SyntaxError("%s:%d: Syntax error. Expected ':'" % (file, dline)) + + grammar.append((file, dline, prodname, syms)) + except SyntaxError: + raise + except Exception: + raise SyntaxError('%s:%d: Syntax error in rule %r' % (file, dline, ps.strip())) + + return grammar + +# ----------------------------------------------------------------------------- +# ParserReflect() +# +# This class represents information extracted for building a parser including +# start symbol, error function, tokens, precedence list, action functions, +# etc. 
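+#
+# Descriptive note: the pdict examined here is simply the caller's namespace,
+# so attributes such as 'tokens', 'precedence', 'p_error' and the various
+# 'p_rulename' action functions are all discovered by name.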
+# ----------------------------------------------------------------------------- +class ParserReflect(object): + def __init__(self, pdict, log=None): + self.pdict = pdict + self.start = None + self.error_func = None + self.tokens = None + self.modules = set() + self.grammar = [] + self.error = False + + if log is None: + self.log = PlyLogger(sys.stderr) + else: + self.log = log + + # Get all of the basic information + def get_all(self): + self.get_start() + self.get_error_func() + self.get_tokens() + self.get_precedence() + self.get_pfunctions() + + # Validate all of the information + def validate_all(self): + self.validate_start() + self.validate_error_func() + self.validate_tokens() + self.validate_precedence() + self.validate_pfunctions() + self.validate_modules() + return self.error + + # Compute a signature over the grammar + def signature(self): + parts = [] + try: + if self.start: + parts.append(self.start) + if self.prec: + parts.append(''.join([''.join(p) for p in self.prec])) + if self.tokens: + parts.append(' '.join(self.tokens)) + for f in self.pfuncs: + if f[3]: + parts.append(f[3]) + except (TypeError, ValueError): + pass + return ''.join(parts) + + # ----------------------------------------------------------------------------- + # validate_modules() + # + # This method checks to see if there are duplicated p_rulename() functions + # in the parser module file. Without this function, it is really easy for + # users to make mistakes by cutting and pasting code fragments (and it's a real + # bugger to try and figure out why the resulting parser doesn't work). Therefore, + # we just do a little regular expression pattern matching of def statements + # to try and detect duplicates. + # ----------------------------------------------------------------------------- + + def validate_modules(self): + # Match def p_funcname( + fre = re.compile(r'\s*def\s+(p_[a-zA-Z_0-9]*)\(') + + for module in self.modules: + try: + lines, linen = inspect.getsourcelines(module) + except IOError: + continue + + counthash = {} + for linen, line in enumerate(lines): + linen += 1 + m = fre.match(line) + if m: + name = m.group(1) + prev = counthash.get(name) + if not prev: + counthash[name] = linen + else: + filename = inspect.getsourcefile(module) + self.log.warning('%s:%d: Function %s redefined. 
Previously defined on line %d', + filename, linen, name, prev) + + # Get the start symbol + def get_start(self): + self.start = self.pdict.get('start') + + # Validate the start symbol + def validate_start(self): + if self.start is not None: + if not isinstance(self.start, string_types): + self.log.error("'start' must be a string") + + # Look for error handler + def get_error_func(self): + self.error_func = self.pdict.get('p_error') + + # Validate the error function + def validate_error_func(self): + if self.error_func: + if isinstance(self.error_func, types.FunctionType): + ismethod = 0 + elif isinstance(self.error_func, types.MethodType): + ismethod = 1 + else: + self.log.error("'p_error' defined, but is not a function or method") + self.error = True + return + + eline = self.error_func.__code__.co_firstlineno + efile = self.error_func.__code__.co_filename + module = inspect.getmodule(self.error_func) + self.modules.add(module) + + argcount = self.error_func.__code__.co_argcount - ismethod + if argcount != 1: + self.log.error('%s:%d: p_error() requires 1 argument', efile, eline) + self.error = True + + # Get the tokens map + def get_tokens(self): + tokens = self.pdict.get('tokens') + if not tokens: + self.log.error('No token list is defined') + self.error = True + return + + if not isinstance(tokens, (list, tuple)): + self.log.error('tokens must be a list or tuple') + self.error = True + return + + if not tokens: + self.log.error('tokens is empty') + self.error = True + return + + self.tokens = tokens + + # Validate the tokens + def validate_tokens(self): + # Validate the tokens. + if 'error' in self.tokens: + self.log.error("Illegal token name 'error'. Is a reserved word") + self.error = True + return + + terminals = set() + for n in self.tokens: + if n in terminals: + self.log.warning('Token %r multiply defined', n) + terminals.add(n) + + # Get the precedence map (if any) + def get_precedence(self): + self.prec = self.pdict.get('precedence') + + # Validate and parse the precedence map + def validate_precedence(self): + preclist = [] + if self.prec: + if not isinstance(self.prec, (list, tuple)): + self.log.error('precedence must be a list or tuple') + self.error = True + return + for level, p in enumerate(self.prec): + if not isinstance(p, (list, tuple)): + self.log.error('Bad precedence table') + self.error = True + return + + if len(p) < 2: + self.log.error('Malformed precedence entry %s. 
Must be (assoc, term, ..., term)', p) + self.error = True + return + assoc = p[0] + if not isinstance(assoc, string_types): + self.log.error('precedence associativity must be a string') + self.error = True + return + for term in p[1:]: + if not isinstance(term, string_types): + self.log.error('precedence items must be strings') + self.error = True + return + preclist.append((term, assoc, level+1)) + self.preclist = preclist + + # Get all p_functions from the grammar + def get_pfunctions(self): + p_functions = [] + for name, item in self.pdict.items(): + if not name.startswith('p_') or name == 'p_error': + continue + if isinstance(item, (types.FunctionType, types.MethodType)): + line = getattr(item, 'co_firstlineno', item.__code__.co_firstlineno) + module = inspect.getmodule(item) + p_functions.append((line, module, name, item.__doc__)) + + # Sort all of the actions by line number; make sure to stringify + # modules to make them sortable, since `line` may not uniquely sort all + # p functions + p_functions.sort(key=lambda p_function: ( + p_function[0], + str(p_function[1]), + p_function[2], + p_function[3])) + self.pfuncs = p_functions + + # Validate all of the p_functions + def validate_pfunctions(self): + grammar = [] + # Check for non-empty symbols + if len(self.pfuncs) == 0: + self.log.error('no rules of the form p_rulename are defined') + self.error = True + return + + for line, module, name, doc in self.pfuncs: + file = inspect.getsourcefile(module) + func = self.pdict[name] + if isinstance(func, types.MethodType): + reqargs = 2 + else: + reqargs = 1 + if func.__code__.co_argcount > reqargs: + self.log.error('%s:%d: Rule %r has too many arguments', file, line, func.__name__) + self.error = True + elif func.__code__.co_argcount < reqargs: + self.log.error('%s:%d: Rule %r requires an argument', file, line, func.__name__) + self.error = True + elif not func.__doc__: + self.log.warning('%s:%d: No documentation string specified in function %r (ignored)', + file, line, func.__name__) + else: + try: + parsed_g = parse_grammar(doc, file, line) + for g in parsed_g: + grammar.append((name, g)) + except SyntaxError as e: + self.log.error(str(e)) + self.error = True + + # Looks like a valid grammar rule + # Mark the file in which defined. + self.modules.add(module) + + # Secondary validation step that looks for p_ definitions that are not functions + # or functions that look like they might be grammar rules. 
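+        #
+        # Illustrative example (hypothetical user code): a function written as
+        #
+        #     def expression(p):
+        #         'expression : expression PLUS term'
+        #
+        # carries a grammar-rule docstring but lacks the p_ prefix, so the loop
+        # below flags it as a possible misnamed rule.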
+
+        for n, v in self.pdict.items():
+            if n.startswith('p_') and isinstance(v, (types.FunctionType, types.MethodType)):
+                continue
+            if n.startswith('t_'):
+                continue
+            if n.startswith('p_') and n != 'p_error':
+                self.log.warning('%r not defined as a function', n)
+            if ((isinstance(v, types.FunctionType) and v.__code__.co_argcount == 1) or
+                (isinstance(v, types.MethodType) and v.__func__.__code__.co_argcount == 2)):
+                if v.__doc__:
+                    try:
+                        doc = v.__doc__.split(' ')
+                        if doc[1] == ':':
+                            self.log.warning('%s:%d: Possible grammar rule %r defined without p_ prefix',
+                                             v.__code__.co_filename, v.__code__.co_firstlineno, n)
+                    except IndexError:
+                        pass
+
+        self.grammar = grammar
+
+# -----------------------------------------------------------------------------
+# yacc(module)
+#
+# Build a parser
+# -----------------------------------------------------------------------------
+
+def yacc(method='LALR', debug=yaccdebug, module=None, tabmodule=tab_module, start=None,
+         check_recursion=True, optimize=False, write_tables=True, debugfile=debug_file,
+         outputdir=None, debuglog=None, errorlog=None, picklefile=None):
+
+    if tabmodule is None:
+        tabmodule = tab_module
+
+    # Reference to the parsing method of the last built parser
+    global parse
+
+    # If pickling is enabled, table files are not created
+    if picklefile:
+        write_tables = 0
+
+    if errorlog is None:
+        errorlog = PlyLogger(sys.stderr)
+
+    # Get the module dictionary used for the parser
+    if module:
+        _items = [(k, getattr(module, k)) for k in dir(module)]
+        pdict = dict(_items)
+        # If no __file__ attribute is available, try to obtain it from the __module__ instead
+        if '__file__' not in pdict:
+            pdict['__file__'] = sys.modules[pdict['__module__']].__file__
+    else:
+        pdict = get_caller_module_dict(2)
+
+    if outputdir is None:
+        # If no output directory is set, the location of the output files
+        # is determined according to the following rules:
+        #     - If tabmodule specifies a package, files go into that package directory
+        #     - Otherwise, files go in the same directory as the specifying module
+        if isinstance(tabmodule, types.ModuleType):
+            srcfile = tabmodule.__file__
+        else:
+            if '.' not in tabmodule:
+                srcfile = pdict['__file__']
+            else:
+                parts = tabmodule.split('.')
+                pkgname = '.'.join(parts[:-1])
+                exec('import %s' % pkgname)
+                srcfile = getattr(sys.modules[pkgname], '__file__', '')
+        outputdir = os.path.dirname(srcfile)
+
+    # Determine if the module is package of a package or not.
+    # If so, fix the tabmodule setting so that tables load correctly
+    pkg = pdict.get('__package__')
+    if pkg and isinstance(tabmodule, str):
+        if '.' not in tabmodule:
+            tabmodule = pkg + '.' + tabmodule
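+    # For illustration (hypothetical package name, not part of the original
+    # module): for a grammar module living in package 'mypkg', a call such as
+    # yacc(tabmodule='parser_out') is rewritten above to
+    # tabmodule='mypkg.parser_out', so the generated tables are written to,
+    # and later imported from, the same package as the grammar.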
+
+
+
+    # Set start symbol if it's specified directly using an argument
+    if start is not None:
+        pdict['start'] = start
+
+    # Collect parser information from the dictionary
+    pinfo = ParserReflect(pdict, log=errorlog)
+    pinfo.get_all()
+
+    if pinfo.error:
+        raise YaccError('Unable to build parser')
+
+    # Check signature against table files (if any)
+    signature = pinfo.signature()
+
+    # Read the tables
+    try:
+        lr = LRTable()
+        if picklefile:
+            read_signature = lr.read_pickle(picklefile)
+        else:
+            read_signature = lr.read_table(tabmodule)
+        if optimize or (read_signature == signature):
+            try:
+                lr.bind_callables(pinfo.pdict)
+                parser = LRParser(lr, pinfo.error_func)
+                parse = parser.parse
+                return parser
+            except Exception as e:
+                errorlog.warning('There was a problem loading the table file: %r', e)
+    except VersionError as e:
+        errorlog.warning(str(e))
+    except ImportError:
+        pass
+
+    if debuglog is None:
+        if debug:
+            try:
+                debuglog = PlyLogger(open(os.path.join(outputdir, debugfile), 'w'))
+            except IOError as e:
+                errorlog.warning("Couldn't open %r. %s" % (debugfile, e))
+                debuglog = NullLogger()
+        else:
+            debuglog = NullLogger()
+
+    debuglog.info('Created by PLY version %s (http://www.dabeaz.com/ply)', __version__)
+
+    errors = False
+
+    # Validate the parser information
+    if pinfo.validate_all():
+        raise YaccError('Unable to build parser')
+
+    if not pinfo.error_func:
+        errorlog.warning('no p_error() function is defined')
+
+    # Create a grammar object
+    grammar = Grammar(pinfo.tokens)
+
+    # Set precedence level for terminals
+    for term, assoc, level in pinfo.preclist:
+        try:
+            grammar.set_precedence(term, assoc, level)
+        except GrammarError as e:
+            errorlog.warning('%s', e)
+
+    # Add productions to the grammar
+    for funcname, gram in pinfo.grammar:
+        file, line, prodname, syms = gram
+        try:
+            grammar.add_production(prodname, syms, funcname, file, line)
+        except GrammarError as e:
+            errorlog.error('%s', e)
+            errors = True
+
+    # Set the grammar start symbols
+    try:
+        if start is None:
+            grammar.set_start(pinfo.start)
+        else:
+            grammar.set_start(start)
+    except GrammarError as e:
+        errorlog.error(str(e))
+        errors = True
+
+    if errors:
+        raise YaccError('Unable to build parser')
+
+    # Verify the grammar structure
+    undefined_symbols = grammar.undefined_symbols()
+    for sym, prod in undefined_symbols:
+        errorlog.error('%s:%d: Symbol %r used, but not defined as a token or a rule', prod.file, prod.line, sym)
+        errors = True
+
+    unused_terminals = grammar.unused_terminals()
+    if unused_terminals:
+        debuglog.info('')
+        debuglog.info('Unused terminals:')
+        debuglog.info('')
+        for term in unused_terminals:
+            errorlog.warning('Token %r defined, but not used', term)
+            debuglog.info('    %s', term)
+
+    # Print out all productions to the debug log
+    if debug:
+        debuglog.info('')
+        debuglog.info('Grammar')
+        debuglog.info('')
+        for n, p in enumerate(grammar.Productions):
+            debuglog.info('Rule %-5d %s', n, p)
+
+    # Find unused non-terminals
+    unused_rules = grammar.unused_rules()
+    for prod in unused_rules:
+        errorlog.warning('%s:%d: Rule %r defined, but not used', prod.file, prod.line, prod.name)
+
+    if len(unused_terminals) == 1:
+        errorlog.warning('There is 1 unused token')
+    if len(unused_terminals) > 1:
+        errorlog.warning('There are %d unused tokens', len(unused_terminals))
+
+    if len(unused_rules) == 1:
+        errorlog.warning('There is 1 unused rule')
+    if len(unused_rules) > 1:
+        errorlog.warning('There are %d unused rules', len(unused_rules))
+
+    if debug:
+        debuglog.info('')
+        debuglog.info('Terminals, with rules where they appear')
+        debuglog.info('')
+        terms = list(grammar.Terminals)
+        terms.sort()
+        for term in terms:
+            debuglog.info('%-20s : %s', term, ' '.join([str(s) for s in grammar.Terminals[term]]))
+
+        debuglog.info('')
+        debuglog.info('Nonterminals, with rules where they appear')
+        debuglog.info('')
+        nonterms = list(grammar.Nonterminals)
+        nonterms.sort()
+        for nonterm in nonterms:
+            debuglog.info('%-20s : %s', nonterm, ' '.join([str(s) for s in grammar.Nonterminals[nonterm]]))
+        debuglog.info('')
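+    # For illustration (hypothetical rule, not part of the original module):
+    # a nonterminal whose only production is
+    #
+    #     def p_a(p):
+    #         'a : a PLUS a'
+    #
+    # can never derive a finite string of terminals, which is what the
+    # infinite-recursion check below reports.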
+    if check_recursion:
+        unreachable = grammar.find_unreachable()
+        for u in unreachable:
+            errorlog.warning('Symbol %r is unreachable', u)
+
+        infinite = grammar.infinite_cycles()
+        for inf in infinite:
+            errorlog.error('Infinite recursion detected for symbol %r', inf)
+            errors = True
+
+    unused_prec = grammar.unused_precedence()
+    for term, assoc in unused_prec:
+        errorlog.error('Precedence rule %r defined for unknown symbol %r', assoc, term)
+        errors = True
+
+    if errors:
+        raise YaccError('Unable to build parser')
+
+    # Run the LRGeneratedTable on the grammar
+    if debug:
+        errorlog.debug('Generating %s tables', method)
+
+    lr = LRGeneratedTable(grammar, method, debuglog)
+
+    if debug:
+        num_sr = len(lr.sr_conflicts)
+
+        # Report shift/reduce and reduce/reduce conflicts
+        if num_sr == 1:
+            errorlog.warning('1 shift/reduce conflict')
+        elif num_sr > 1:
+            errorlog.warning('%d shift/reduce conflicts', num_sr)
+
+        num_rr = len(lr.rr_conflicts)
+        if num_rr == 1:
+            errorlog.warning('1 reduce/reduce conflict')
+        elif num_rr > 1:
+            errorlog.warning('%d reduce/reduce conflicts', num_rr)
+
+    # Write out conflicts to the output file
+    if debug and (lr.sr_conflicts or lr.rr_conflicts):
+        debuglog.warning('')
+        debuglog.warning('Conflicts:')
+        debuglog.warning('')
+
+        for state, tok, resolution in lr.sr_conflicts:
+            debuglog.warning('shift/reduce conflict for %s in state %d resolved as %s', tok, state, resolution)
+
+        already_reported = set()
+        for state, rule, rejected in lr.rr_conflicts:
+            if (state, id(rule), id(rejected)) in already_reported:
+                continue
+            debuglog.warning('reduce/reduce conflict in state %d resolved using rule (%s)', state, rule)
+            debuglog.warning('rejected rule (%s) in state %d', rejected, state)
+            errorlog.warning('reduce/reduce conflict in state %d resolved using rule (%s)', state, rule)
+            errorlog.warning('rejected rule (%s) in state %d', rejected, state)
+            already_reported.add((state, id(rule), id(rejected)))
+
+        warned_never = []
+        for state, rule, rejected in lr.rr_conflicts:
+            if not rejected.reduced and (rejected not in warned_never):
+                debuglog.warning('Rule (%s) is never reduced', rejected)
+                errorlog.warning('Rule (%s) is never reduced', rejected)
+                warned_never.append(rejected)
+
+    # Write the table file if requested
+    if write_tables:
+        try:
+            lr.write_table(tabmodule, outputdir, signature)
+        except IOError as e:
+            errorlog.warning("Couldn't create %r. %s" % (tabmodule, e))
+
+    # Write a pickled version of the tables
+    if picklefile:
+        try:
+            lr.pickle_table(picklefile, signature)
+        except IOError as e:
+            errorlog.warning("Couldn't create %r. %s" % (picklefile, e))
+
+    # Build the parser
+    lr.bind_callables(pinfo.pdict)
+    parser = LRParser(lr, pinfo.error_func)
+
+    parse = parser.parse
+    return parser
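The reflection and table-building steps above are all driven by a single `yacc.yacc()` call in a grammar module. A minimal sketch of such a module, assuming `ply` is importable; the token and rule names here are made up for illustration and are not part of the vendored files:

```python
# Minimal PLY usage sketch (hypothetical calculator grammar).
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS')

t_PLUS = r'\+'
t_ignore = ' '

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    t.lexer.skip(1)

# One (assoc, term, ..., term) tuple per level, as validated by ParserReflect.
precedence = (
    ('left', 'PLUS'),
)

def p_expr_plus(p):
    'expr : expr PLUS expr'
    p[0] = p[1] + p[3]

def p_expr_number(p):
    'expr : NUMBER'
    p[0] = p[1]

def p_error(p):
    print('Syntax error at %r' % (p,))

lexer = lex.lex()
# Builds the LALR tables via the yacc() entry point above; skip table caching.
parser = yacc.yacc(write_tables=False, debug=False)
print(parser.parse('1 + 2 + 3'))  # -> 6
```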
diff --git a/templates/skills/file_manager/dependencies/pycparser/ply/ygen.py b/templates/skills/file_manager/dependencies/pycparser/ply/ygen.py
new file mode 100644
index 00000000..acf5ca1a
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pycparser/ply/ygen.py
@@ -0,0 +1,74 @@
+# ply: ygen.py
+#
+# This is a support program that auto-generates different versions of the YACC parsing
+# function with different features removed for the purposes of performance.
+#
+# Users should edit the method LRParser.parsedebug() in yacc.py. The source code
+# for that method is then used to create the other methods. See the comments in
+# yacc.py for further details.
+
+import os.path
+import shutil
+
+def get_source_range(lines, tag):
+    srclines = enumerate(lines)
+    start_tag = '#--! %s-start' % tag
+    end_tag = '#--! %s-end' % tag
+
+    for start_index, line in srclines:
+        if line.strip().startswith(start_tag):
+            break
+
+    for end_index, line in srclines:
+        if line.strip().endswith(end_tag):
+            break
+
+    return (start_index + 1, end_index)
+
+def filter_section(lines, tag):
+    filtered_lines = []
+    include = True
+    tag_text = '#--! %s' % tag
+    for line in lines:
+        if line.strip().startswith(tag_text):
+            include = not include
+        elif include:
+            filtered_lines.append(line)
+    return filtered_lines
+
+def main():
+    dirname = os.path.dirname(__file__)
+    shutil.copy2(os.path.join(dirname, 'yacc.py'), os.path.join(dirname, 'yacc.py.bak'))
+    with open(os.path.join(dirname, 'yacc.py'), 'r') as f:
+        lines = f.readlines()
+
+    parse_start, parse_end = get_source_range(lines, 'parsedebug')
+    parseopt_start, parseopt_end = get_source_range(lines, 'parseopt')
+    parseopt_notrack_start, parseopt_notrack_end = get_source_range(lines, 'parseopt-notrack')
+
+    # Get the original source
+    orig_lines = lines[parse_start:parse_end]
+
+    # Filter the DEBUG sections out
+    parseopt_lines = filter_section(orig_lines, 'DEBUG')
+
+    # Filter the TRACKING sections out
+    parseopt_notrack_lines = filter_section(parseopt_lines, 'TRACKING')
+
+    # Replace the parser source sections with updated versions
+    lines[parseopt_notrack_start:parseopt_notrack_end] = parseopt_notrack_lines
+    lines[parseopt_start:parseopt_end] = parseopt_lines
+
+    lines = [line.rstrip()+'\n' for line in lines]
+    with open(os.path.join(dirname, 'yacc.py'), 'w') as f:
+        f.writelines(lines)
+
+    print('Updated yacc.py')
+
+if __name__ == '__main__':
+    main()
+
+
+
+
+
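A self-contained sketch of the toggle-marker filtering that ygen.py performs; the input lines here are hypothetical, but the filtering logic mirrors `filter_section` above:

```python
# Sketch of ygen.py's '#--! TAG' toggle filtering (hypothetical input lines).
def filter_section(lines, tag):
    # Same logic as ygen.filter_section: drop everything between a pair
    # of '#--! TAG' marker lines, keep the rest.
    filtered, include = [], True
    tag_text = '#--! %s' % tag
    for line in lines:
        if line.strip().startswith(tag_text):
            include = not include
        elif include:
            filtered.append(line)
    return filtered

source = [
    "state = 0\n",
    "#--! DEBUG\n",
    "print('entering state', state)\n",
    "#--! DEBUG\n",
    "state += 1\n",
]

print(''.join(filter_section(source, 'DEBUG')))
# Prints:
# state = 0
# state += 1
```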
diff --git a/templates/skills/file_manager/dependencies/pycparser/plyparser.py b/templates/skills/file_manager/dependencies/pycparser/plyparser.py
new file mode 100644
index 00000000..b8f4c439
--- /dev/null
+++ b/templates/skills/file_manager/dependencies/pycparser/plyparser.py
@@ -0,0 +1,133 @@
+#-----------------------------------------------------------------
+# plyparser.py
+#
+# PLYParser class and other utilities for simplifying programming
+# parsers with PLY
+#
+# Eli Bendersky [https://eli.thegreenplace.net/]
+# License: BSD
+#-----------------------------------------------------------------
+
+import warnings
+
+class Coord(object):
+    """ Coordinates of a syntactic element. Consists of:
+            - File name
+            - Line number
+            - (optional) column number, for the Lexer
+    """
+    __slots__ = ('file', 'line', 'column', '__weakref__')
+    def __init__(self, file, line, column=None):
+        self.file = file
+        self.line = line
+        self.column = column
+
+    def __str__(self):
+        str = "%s:%s" % (self.file, self.line)
+        if self.column: str += ":%s" % self.column
+        return str
+
+
+class ParseError(Exception): pass
+
+
+class PLYParser(object):
+    def _create_opt_rule(self, rulename):
+        """ Given a rule name, creates an optional ply.yacc rule
+            for it. The name of the optional rule is
+            <rulename>_opt
+        """
+        optname = rulename + '_opt'
+
+        def optrule(self, p):
+            p[0] = p[1]
+
+        optrule.__doc__ = '%s : empty\n| %s' % (optname, rulename)
+        optrule.__name__ = 'p_%s' % optname
+        setattr(self.__class__, optrule.__name__, optrule)
+
+    def _coord(self, lineno, column=None):
+        return Coord(
+                file=self.clex.filename,
+                line=lineno,
+                column=column)
+
+    def _token_coord(self, p, token_idx):
+        """ Returns the coordinates for the YaccProduction object 'p' indexed
+            with 'token_idx'. The coordinate includes the 'lineno' and
+            'column'. Both follow the lex semantic, starting from 1.
+        """
+        last_cr = p.lexer.lexer.lexdata.rfind('\n', 0, p.lexpos(token_idx))
+        if last_cr < 0:
+            last_cr = -1
+        column = (p.lexpos(token_idx) - (last_cr))
+        return self._coord(p.lineno(token_idx), column)
+
+    def _parse_error(self, msg, coord):
+        raise ParseError("%s: %s" % (coord, msg))
+
+
+def parameterized(*params):
+    """ Decorator to create parameterized rules.
+
+    Parameterized rule methods must be named starting with 'p_' and contain
+    'xxx', and their docstrings may contain 'xxx' and 'yyy'. These will be
+    replaced by the given parameter tuples. For example, ``p_xxx_rule()`` with
+    docstring 'xxx_rule : yyy' when decorated with
+    ``@parameterized(('id', 'ID'))`` produces ``p_id_rule()`` with the docstring
+    'id_rule : ID'. Using multiple tuples produces multiple rules.
+    """
+    def decorate(rule_func):
+        rule_func._params = params
+        return rule_func
+    return decorate
+
+
+def template(cls):
+    """ Class decorator to generate rules from parameterized rule templates.
+
+    See `parameterized` for more information on parameterized rules.
+    """
+    issued_nodoc_warning = False
+    for attr_name in dir(cls):
+        if attr_name.startswith('p_'):
+            method = getattr(cls, attr_name)
+            if hasattr(method, '_params'):
+                # Remove the template method
+                delattr(cls, attr_name)
+                # Create parameterized rules from this method; only run this if
+                # the method has a docstring. This is to address an issue when
+                # pycparser's users are installed in -OO mode which strips
+                # docstrings away.
+                # See: https://github.com/eliben/pycparser/pull/198/ and
+                #      https://github.com/eliben/pycparser/issues/197
+                # for discussion.
+                if method.__doc__ is not None:
+                    _create_param_rules(cls, method)
+                elif not issued_nodoc_warning:
+                    warnings.warn(
+                        'parsing methods must have __doc__ for pycparser to work properly',
+                        RuntimeWarning,
+                        stacklevel=2)
+                    issued_nodoc_warning = True
+    return cls
+
+
+def _create_param_rules(cls, func):
+    """ Create ply.yacc rules based on a parameterized rule function
+
+        Generates new methods (one per each pair of parameters) based on the
+        template rule function `func`, and attaches them to `cls`. The rule
+        function's parameters must be accessible via its `_params` attribute.
+ """ + for xxx, yyy in func._params: + # Use the template method's body for each new method + def param_rule(self, p): + func(self, p) + + # Substitute in the params for the grammar rule and function name + param_rule.__doc__ = func.__doc__.replace('xxx', xxx).replace('yyy', yyy) + param_rule.__name__ = func.__name__.replace('xxx', xxx) + + # Attach the new method to the class + setattr(cls, param_rule.__name__, param_rule) diff --git a/templates/skills/file_manager/dependencies/pycparser/yacctab.py b/templates/skills/file_manager/dependencies/pycparser/yacctab.py new file mode 100644 index 00000000..68b14660 --- /dev/null +++ b/templates/skills/file_manager/dependencies/pycparser/yacctab.py @@ -0,0 +1,369 @@ + +# yacctab.py +# This file is automatically generated. Do not edit. +_tabversion = '3.10' + +_lr_method = 'LALR' + +_lr_signature = 'translation_unit_or_emptyleftLORleftLANDleftORleftXORleftANDleftEQNEleftGTGELTLEleftRSHIFTLSHIFTleftPLUSMINUSleftTIMESDIVIDEMODAUTO BREAK CASE CHAR CONST CONTINUE DEFAULT DO DOUBLE ELSE ENUM EXTERN FLOAT FOR GOTO IF INLINE INT LONG REGISTER OFFSETOF RESTRICT RETURN SHORT SIGNED SIZEOF STATIC STRUCT SWITCH TYPEDEF UNION UNSIGNED VOID VOLATILE WHILE __INT128 _BOOL _COMPLEX _NORETURN _THREAD_LOCAL _STATIC_ASSERT _ATOMIC _ALIGNOF _ALIGNAS _PRAGMA ID TYPEID INT_CONST_DEC INT_CONST_OCT INT_CONST_HEX INT_CONST_BIN INT_CONST_CHAR FLOAT_CONST HEX_FLOAT_CONST CHAR_CONST WCHAR_CONST U8CHAR_CONST U16CHAR_CONST U32CHAR_CONST STRING_LITERAL WSTRING_LITERAL U8STRING_LITERAL U16STRING_LITERAL U32STRING_LITERAL PLUS MINUS TIMES DIVIDE MOD OR AND NOT XOR LSHIFT RSHIFT LOR LAND LNOT LT LE GT GE EQ NE EQUALS TIMESEQUAL DIVEQUAL MODEQUAL PLUSEQUAL MINUSEQUAL LSHIFTEQUAL RSHIFTEQUAL ANDEQUAL XOREQUAL OREQUAL PLUSPLUS MINUSMINUS ARROW CONDOP LPAREN RPAREN LBRACKET RBRACKET LBRACE RBRACE COMMA PERIOD SEMI COLON ELLIPSIS PPHASH PPPRAGMA PPPRAGMASTRabstract_declarator_opt : empty\n| abstract_declaratorassignment_expression_opt : empty\n| assignment_expressionblock_item_list_opt : empty\n| block_item_listdeclaration_list_opt : empty\n| declaration_listdeclaration_specifiers_no_type_opt : empty\n| declaration_specifiers_no_typedesignation_opt : empty\n| designationexpression_opt : empty\n| expressionid_init_declarator_list_opt : empty\n| id_init_declarator_listidentifier_list_opt : empty\n| identifier_listinit_declarator_list_opt : empty\n| init_declarator_listinitializer_list_opt : empty\n| initializer_listparameter_type_list_opt : empty\n| parameter_type_liststruct_declarator_list_opt : empty\n| struct_declarator_listtype_qualifier_list_opt : empty\n| type_qualifier_list direct_id_declarator : ID\n direct_id_declarator : LPAREN id_declarator RPAREN\n direct_id_declarator : direct_id_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET\n direct_id_declarator : direct_id_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET\n | direct_id_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET\n direct_id_declarator : direct_id_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET\n direct_id_declarator : direct_id_declarator LPAREN parameter_type_list RPAREN\n | direct_id_declarator LPAREN identifier_list_opt RPAREN\n direct_typeid_declarator : TYPEID\n direct_typeid_declarator : LPAREN typeid_declarator RPAREN\n direct_typeid_declarator : direct_typeid_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET\n direct_typeid_declarator : direct_typeid_declarator LBRACKET STATIC 
type_qualifier_list_opt assignment_expression RBRACKET\n | direct_typeid_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET\n direct_typeid_declarator : direct_typeid_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET\n direct_typeid_declarator : direct_typeid_declarator LPAREN parameter_type_list RPAREN\n | direct_typeid_declarator LPAREN identifier_list_opt RPAREN\n direct_typeid_noparen_declarator : TYPEID\n direct_typeid_noparen_declarator : direct_typeid_noparen_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET\n direct_typeid_noparen_declarator : direct_typeid_noparen_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET\n | direct_typeid_noparen_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET\n direct_typeid_noparen_declarator : direct_typeid_noparen_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET\n direct_typeid_noparen_declarator : direct_typeid_noparen_declarator LPAREN parameter_type_list RPAREN\n | direct_typeid_noparen_declarator LPAREN identifier_list_opt RPAREN\n id_declarator : direct_id_declarator\n id_declarator : pointer direct_id_declarator\n typeid_declarator : direct_typeid_declarator\n typeid_declarator : pointer direct_typeid_declarator\n typeid_noparen_declarator : direct_typeid_noparen_declarator\n typeid_noparen_declarator : pointer direct_typeid_noparen_declarator\n translation_unit_or_empty : translation_unit\n | empty\n translation_unit : external_declaration\n translation_unit : translation_unit external_declaration\n external_declaration : function_definition\n external_declaration : declaration\n external_declaration : pp_directive\n | pppragma_directive\n external_declaration : SEMI\n external_declaration : static_assert\n static_assert : _STATIC_ASSERT LPAREN constant_expression COMMA unified_string_literal RPAREN\n | _STATIC_ASSERT LPAREN constant_expression RPAREN\n pp_directive : PPHASH\n pppragma_directive : PPPRAGMA\n | PPPRAGMA PPPRAGMASTR\n | _PRAGMA LPAREN unified_string_literal RPAREN\n pppragma_directive_list : pppragma_directive\n | pppragma_directive_list pppragma_directive\n function_definition : id_declarator declaration_list_opt compound_statement\n function_definition : declaration_specifiers id_declarator declaration_list_opt compound_statement\n statement : labeled_statement\n | expression_statement\n | compound_statement\n | selection_statement\n | iteration_statement\n | jump_statement\n | pppragma_directive\n | static_assert\n pragmacomp_or_statement : pppragma_directive_list statement\n | statement\n decl_body : declaration_specifiers init_declarator_list_opt\n | declaration_specifiers_no_type id_init_declarator_list_opt\n declaration : decl_body SEMI\n declaration_list : declaration\n | declaration_list declaration\n declaration_specifiers_no_type : type_qualifier declaration_specifiers_no_type_opt\n declaration_specifiers_no_type : storage_class_specifier declaration_specifiers_no_type_opt\n declaration_specifiers_no_type : function_specifier declaration_specifiers_no_type_opt\n declaration_specifiers_no_type : atomic_specifier declaration_specifiers_no_type_opt\n declaration_specifiers_no_type : alignment_specifier declaration_specifiers_no_type_opt\n declaration_specifiers : declaration_specifiers type_qualifier\n declaration_specifiers : declaration_specifiers storage_class_specifier\n declaration_specifiers : declaration_specifiers function_specifier\n declaration_specifiers : 
declaration_specifiers type_specifier_no_typeid\n declaration_specifiers : type_specifier\n declaration_specifiers : declaration_specifiers_no_type type_specifier\n declaration_specifiers : declaration_specifiers alignment_specifier\n storage_class_specifier : AUTO\n | REGISTER\n | STATIC\n | EXTERN\n | TYPEDEF\n | _THREAD_LOCAL\n function_specifier : INLINE\n | _NORETURN\n type_specifier_no_typeid : VOID\n | _BOOL\n | CHAR\n | SHORT\n | INT\n | LONG\n | FLOAT\n | DOUBLE\n | _COMPLEX\n | SIGNED\n | UNSIGNED\n | __INT128\n type_specifier : typedef_name\n | enum_specifier\n | struct_or_union_specifier\n | type_specifier_no_typeid\n | atomic_specifier\n atomic_specifier : _ATOMIC LPAREN type_name RPAREN\n type_qualifier : CONST\n | RESTRICT\n | VOLATILE\n | _ATOMIC\n init_declarator_list : init_declarator\n | init_declarator_list COMMA init_declarator\n init_declarator : declarator\n | declarator EQUALS initializer\n id_init_declarator_list : id_init_declarator\n | id_init_declarator_list COMMA init_declarator\n id_init_declarator : id_declarator\n | id_declarator EQUALS initializer\n specifier_qualifier_list : specifier_qualifier_list type_specifier_no_typeid\n specifier_qualifier_list : specifier_qualifier_list type_qualifier\n specifier_qualifier_list : type_specifier\n specifier_qualifier_list : type_qualifier_list type_specifier\n specifier_qualifier_list : alignment_specifier\n specifier_qualifier_list : specifier_qualifier_list alignment_specifier\n struct_or_union_specifier : struct_or_union ID\n | struct_or_union TYPEID\n struct_or_union_specifier : struct_or_union brace_open struct_declaration_list brace_close\n | struct_or_union brace_open brace_close\n struct_or_union_specifier : struct_or_union ID brace_open struct_declaration_list brace_close\n | struct_or_union ID brace_open brace_close\n | struct_or_union TYPEID brace_open struct_declaration_list brace_close\n | struct_or_union TYPEID brace_open brace_close\n struct_or_union : STRUCT\n | UNION\n struct_declaration_list : struct_declaration\n | struct_declaration_list struct_declaration\n struct_declaration : specifier_qualifier_list struct_declarator_list_opt SEMI\n struct_declaration : SEMI\n struct_declaration : pppragma_directive\n struct_declarator_list : struct_declarator\n | struct_declarator_list COMMA struct_declarator\n struct_declarator : declarator\n struct_declarator : declarator COLON constant_expression\n | COLON constant_expression\n enum_specifier : ENUM ID\n | ENUM TYPEID\n enum_specifier : ENUM brace_open enumerator_list brace_close\n enum_specifier : ENUM ID brace_open enumerator_list brace_close\n | ENUM TYPEID brace_open enumerator_list brace_close\n enumerator_list : enumerator\n | enumerator_list COMMA\n | enumerator_list COMMA enumerator\n alignment_specifier : _ALIGNAS LPAREN type_name RPAREN\n | _ALIGNAS LPAREN constant_expression RPAREN\n enumerator : ID\n | ID EQUALS constant_expression\n declarator : id_declarator\n | typeid_declarator\n pointer : TIMES type_qualifier_list_opt\n | TIMES type_qualifier_list_opt pointer\n type_qualifier_list : type_qualifier\n | type_qualifier_list type_qualifier\n parameter_type_list : parameter_list\n | parameter_list COMMA ELLIPSIS\n parameter_list : parameter_declaration\n | parameter_list COMMA parameter_declaration\n parameter_declaration : declaration_specifiers id_declarator\n | declaration_specifiers typeid_noparen_declarator\n parameter_declaration : declaration_specifiers abstract_declarator_opt\n identifier_list : identifier\n | identifier_list COMMA 
identifier\n initializer : assignment_expression\n initializer : brace_open initializer_list_opt brace_close\n | brace_open initializer_list COMMA brace_close\n initializer_list : designation_opt initializer\n | initializer_list COMMA designation_opt initializer\n designation : designator_list EQUALS\n designator_list : designator\n | designator_list designator\n designator : LBRACKET constant_expression RBRACKET\n | PERIOD identifier\n type_name : specifier_qualifier_list abstract_declarator_opt\n abstract_declarator : pointer\n abstract_declarator : pointer direct_abstract_declarator\n abstract_declarator : direct_abstract_declarator\n direct_abstract_declarator : LPAREN abstract_declarator RPAREN direct_abstract_declarator : direct_abstract_declarator LBRACKET assignment_expression_opt RBRACKET\n direct_abstract_declarator : LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET\n direct_abstract_declarator : direct_abstract_declarator LBRACKET TIMES RBRACKET\n direct_abstract_declarator : LBRACKET TIMES RBRACKET\n direct_abstract_declarator : direct_abstract_declarator LPAREN parameter_type_list_opt RPAREN\n direct_abstract_declarator : LPAREN parameter_type_list_opt RPAREN\n block_item : declaration\n | statement\n block_item_list : block_item\n | block_item_list block_item\n compound_statement : brace_open block_item_list_opt brace_close labeled_statement : ID COLON pragmacomp_or_statement labeled_statement : CASE constant_expression COLON pragmacomp_or_statement labeled_statement : DEFAULT COLON pragmacomp_or_statement selection_statement : IF LPAREN expression RPAREN pragmacomp_or_statement selection_statement : IF LPAREN expression RPAREN statement ELSE pragmacomp_or_statement selection_statement : SWITCH LPAREN expression RPAREN pragmacomp_or_statement iteration_statement : WHILE LPAREN expression RPAREN pragmacomp_or_statement iteration_statement : DO pragmacomp_or_statement WHILE LPAREN expression RPAREN SEMI iteration_statement : FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement iteration_statement : FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement jump_statement : GOTO ID SEMI jump_statement : BREAK SEMI jump_statement : CONTINUE SEMI jump_statement : RETURN expression SEMI\n | RETURN SEMI\n expression_statement : expression_opt SEMI expression : assignment_expression\n | expression COMMA assignment_expression\n assignment_expression : LPAREN compound_statement RPAREN typedef_name : TYPEID assignment_expression : conditional_expression\n | unary_expression assignment_operator assignment_expression\n assignment_operator : EQUALS\n | XOREQUAL\n | TIMESEQUAL\n | DIVEQUAL\n | MODEQUAL\n | PLUSEQUAL\n | MINUSEQUAL\n | LSHIFTEQUAL\n | RSHIFTEQUAL\n | ANDEQUAL\n | OREQUAL\n constant_expression : conditional_expression conditional_expression : binary_expression\n | binary_expression CONDOP expression COLON conditional_expression\n binary_expression : cast_expression\n | binary_expression TIMES binary_expression\n | binary_expression DIVIDE binary_expression\n | binary_expression MOD binary_expression\n | binary_expression PLUS binary_expression\n | binary_expression MINUS binary_expression\n | binary_expression RSHIFT binary_expression\n | binary_expression LSHIFT binary_expression\n | binary_expression LT binary_expression\n | binary_expression LE binary_expression\n | binary_expression GE binary_expression\n | binary_expression GT binary_expression\n | binary_expression EQ 
binary_expression\n | binary_expression NE binary_expression\n | binary_expression AND binary_expression\n | binary_expression OR binary_expression\n | binary_expression XOR binary_expression\n | binary_expression LAND binary_expression\n | binary_expression LOR binary_expression\n cast_expression : unary_expression cast_expression : LPAREN type_name RPAREN cast_expression unary_expression : postfix_expression unary_expression : PLUSPLUS unary_expression\n | MINUSMINUS unary_expression\n | unary_operator cast_expression\n unary_expression : SIZEOF unary_expression\n | SIZEOF LPAREN type_name RPAREN\n | _ALIGNOF LPAREN type_name RPAREN\n unary_operator : AND\n | TIMES\n | PLUS\n | MINUS\n | NOT\n | LNOT\n postfix_expression : primary_expression postfix_expression : postfix_expression LBRACKET expression RBRACKET postfix_expression : postfix_expression LPAREN argument_expression_list RPAREN\n | postfix_expression LPAREN RPAREN\n postfix_expression : postfix_expression PERIOD ID\n | postfix_expression PERIOD TYPEID\n | postfix_expression ARROW ID\n | postfix_expression ARROW TYPEID\n postfix_expression : postfix_expression PLUSPLUS\n | postfix_expression MINUSMINUS\n postfix_expression : LPAREN type_name RPAREN brace_open initializer_list brace_close\n | LPAREN type_name RPAREN brace_open initializer_list COMMA brace_close\n primary_expression : identifier primary_expression : constant primary_expression : unified_string_literal\n | unified_wstring_literal\n primary_expression : LPAREN expression RPAREN primary_expression : OFFSETOF LPAREN type_name COMMA offsetof_member_designator RPAREN\n offsetof_member_designator : identifier\n | offsetof_member_designator PERIOD identifier\n | offsetof_member_designator LBRACKET expression RBRACKET\n argument_expression_list : assignment_expression\n | argument_expression_list COMMA assignment_expression\n identifier : ID constant : INT_CONST_DEC\n | INT_CONST_OCT\n | INT_CONST_HEX\n | INT_CONST_BIN\n | INT_CONST_CHAR\n constant : FLOAT_CONST\n | HEX_FLOAT_CONST\n constant : CHAR_CONST\n | WCHAR_CONST\n | U8CHAR_CONST\n | U16CHAR_CONST\n | U32CHAR_CONST\n unified_string_literal : STRING_LITERAL\n | unified_string_literal STRING_LITERAL\n unified_wstring_literal : WSTRING_LITERAL\n | U8STRING_LITERAL\n | U16STRING_LITERAL\n | U32STRING_LITERAL\n | unified_wstring_literal WSTRING_LITERAL\n | unified_wstring_literal U8STRING_LITERAL\n | unified_wstring_literal U16STRING_LITERAL\n | unified_wstring_literal U32STRING_LITERAL\n brace_open : LBRACE\n brace_close : RBRACE\n empty : ' + +_lr_action_items = 
{'$end':([0,1,2,3,4,5,6,7,8,9,10,14,15,64,90,91,127,208,251,262,267,355,499,],[-340,0,-58,-59,-60,-62,-63,-64,-65,-66,-67,-70,-71,-61,-90,-72,-76,-339,-77,-73,-69,-221,-68,]),'SEMI':([0,2,4,5,6,7,8,9,10,12,13,14,15,19,21,22,23,24,25,26,27,28,29,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,69,70,71,72,73,74,75,76,77,78,79,81,83,84,85,86,87,88,89,90,91,97,98,99,100,101,102,103,104,105,106,107,108,110,111,112,117,118,119,121,122,123,124,127,128,130,132,139,140,143,144,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,203,204,205,206,207,208,209,210,211,212,214,220,221,222,223,224,225,226,227,228,229,230,231,232,233,236,239,242,245,246,247,248,249,250,251,252,253,254,255,262,263,267,291,292,293,295,296,297,300,301,302,303,311,312,326,327,330,333,334,335,336,337,338,339,340,341,342,343,344,345,346,348,349,353,354,355,356,357,358,360,361,369,370,371,372,373,374,375,376,377,403,404,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,439,440,459,460,463,464,465,468,469,470,471,473,475,479,480,481,482,483,484,485,486,493,494,497,499,501,502,505,506,508,509,522,523,524,525,526,527,529,530,531,535,536,538,552,553,554,555,556,558,561,563,570,571,574,579,580,582,584,585,586,],[9,9,-60,-62,-63,-64,-65,-66,-67,-340,90,-70,-71,-52,-340,-340,-340,-128,-102,-340,-340,-29,-107,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,-340,-340,-129,-134,-181,-98,-99,-100,-101,-104,-88,-134,-19,-20,-135,-137,-182,-54,-37,-90,-72,-53,-93,-9,-10,-340,-94,-95,-103,-89,-129,-15,-16,-139,-141,-97,-96,-169,-170,-338,-149,-150,210,-76,-340,-181,-55,-328,-30,-306,-255,-256,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,210,210,210,-152,-159,-339,-340,-162,-163,-145,-147,-13,-340,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,-315,361,-14,-340,374,375,377,-238,-242,-277,-77,-38,-136,-138,-196,-73,-329,-69,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-35,-36,-140,-142,-171,210,-154,210,-156,-151,-160,465,-143,-144,-148,-25,-26,-164,-166,-146,-130,-177,-178,-221,-220,-13,-340,-340,-237,-340,-87,-74,-340,483,-233,-234,484,-236,-43,-44,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,-295,-296,-297,-298,-299,-31,-34,-172,-173,-153,-155,-161,-168,-222,-340,-224,-240,-239,-86,-75,529,-340,-232,-235,-243,-197,-39,-42,-278,-68,-293,-294,-284,-285,-32,-33,-165,-167,-223,-340,-340,-340,-340,559,-198,-40,-41,-257,-225,-87,-74,-227,-228,572,-302,-309,-340,580,-303,-226,-229,-340,-340,-231,-230,]),'PPHASH':([0,2,4,5,6,7,8,9,10,14,15,64,90,91,127,208,251,262,267,355,499,],[14,14,-60,-62,-63,-64,-65,-66,-67,-70,-71,-61,-90,-72,-76,-339,-77,-73,-69,-221,-68,]),'PPPRAGMA':([0,2,4,5,6,7,8,9,10,14,15,64,90,91,121,124,127,128,203,204,205,207,208,210,211,221,222,223,224,225,226,227,228,229,230,231,232,242,251,262,267,333,335,338,355,356,358,360,361,369,370,371,374,375,377,465,469,470,471,479,480,483,484,499,524,525,526,527,552,553,554,555,556,570,579,580,582,584,585,586,],[15,15,-60,-62,-63,-64,-65,-66,-67,-70,-71,-61,-90,-72,-338,15,-76,15,15,15,15,-159,-339,-162,-163,15,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,15,-77,-73,-69,15,15,-160,-221,-220,15,15,-237,15,-87,-74,-233,-234,-236,-161,-222,15,-224,-86,-75,-232,-235,-68,-223,15,15,15,-225,-87,-74,-227,-228,15,-226,-229,15,15,-231,-230,]),'_P
RAGMA':([0,2,4,5,6,7,8,9,10,14,15,64,90,91,121,124,127,128,203,204,205,207,208,210,211,221,222,223,224,225,226,227,228,229,230,231,232,242,251,262,267,333,335,338,355,356,358,360,361,369,370,371,374,375,377,465,469,470,471,479,480,483,484,499,524,525,526,527,552,553,554,555,556,570,579,580,582,584,585,586,],[16,16,-60,-62,-63,-64,-65,-66,-67,-70,-71,-61,-90,-72,-338,16,-76,16,16,16,16,-159,-339,-162,-163,16,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,16,-77,-73,-69,16,16,-160,-221,-220,16,16,-237,16,-87,-74,-233,-234,-236,-161,-222,16,-224,-86,-75,-232,-235,-68,-223,16,16,16,-225,-87,-74,-227,-228,16,-226,-229,16,16,-231,-230,]),'_STATIC_ASSERT':([0,2,4,5,6,7,8,9,10,14,15,64,90,91,121,127,128,208,221,222,223,224,225,226,227,228,229,230,231,232,242,251,262,267,355,356,358,360,361,369,370,371,374,375,377,469,470,471,479,480,483,484,499,524,525,526,527,552,553,554,555,556,570,579,580,582,584,585,586,],[18,18,-60,-62,-63,-64,-65,-66,-67,-70,-71,-61,-90,-72,-338,-76,18,-339,18,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,18,-77,-73,-69,-221,-220,18,18,-237,18,-87,-74,-233,-234,-236,-222,18,-224,-86,-75,-232,-235,-68,-223,18,18,18,-225,-87,-74,-227,-228,18,-226,-229,18,18,-231,-230,]),'ID':([0,2,4,5,6,7,8,9,10,12,14,15,17,20,21,22,23,24,25,26,27,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,62,63,64,69,70,71,72,74,75,76,77,78,80,81,82,90,91,94,95,96,98,99,100,101,102,103,104,106,112,113,114,115,116,117,118,119,120,121,122,123,126,127,128,134,135,136,137,141,147,148,149,150,153,154,155,156,160,161,182,183,184,192,194,195,196,197,198,199,206,208,209,212,214,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,244,247,251,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,294,298,306,309,310,314,318,322,323,330,331,332,334,336,337,340,341,342,347,348,349,353,354,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,398,400,401,402,405,448,449,452,455,457,459,460,463,464,466,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,507,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,564,565,570,572,579,580,582,584,585,586,],[28,28,-60,-62,-63,-64,-65,-66,-67,28,-70,-71,28,28,-340,-340,-340,-128,-102,28,-340,-107,-340,-125,-126,-127,-129,-241,118,122,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-157,-158,-61,28,28,-129,-134,-98,-99,-100,-101,-104,28,-134,28,-90,-72,159,-340,159,-93,-9,-10,-340,-94,-95,-103,-129,-97,-183,-27,-28,-185,-96,-169,-170,202,-338,-149,-150,159,-76,233,28,159,-340,159,159,-287,-288,-289,-286,159,159,159,159,-290,-291,159,-340,-28,28,28,159,-184,-186,202,202,-152,-339,28,-145,-147,233,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,159,159,233,373,159,-77,-340,159,-340,-28,-73,-69,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,159,431,433,159,159,-287,159,159,159,28,28,-340,-171,202,159,-154,-156,-151,-143,-144,-148,159,-146,-130,-177,-178,-221,-220,233,233,-237,159,159,159,159,233,-87,-74,159,-233,-234,-236,159,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,159,-12,159,159,-287,159,159,159,-340,159,28,159,159,-172,-173,-153,-155,28,159,-222,233,-224,159,-86,-75,159,-232,-235,-340,-201,-340,-68,159,159,159,159,-340,-28,-287,-223,233,233,233,159,159,159,-11,-287,159,159,-225,-87,-74,-227,-228,159,-340,159,159,233,159,-226,-2
29,233,233,-231,-230,]),'LPAREN':([0,2,4,5,6,7,8,9,10,12,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,64,69,70,71,72,74,75,76,77,78,80,81,82,88,89,90,91,94,95,97,98,99,100,101,102,103,104,106,109,112,113,114,115,116,117,118,119,121,122,123,126,127,128,132,134,135,136,139,140,141,143,147,148,149,150,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,192,194,195,196,197,206,208,209,212,214,216,221,222,223,224,225,226,227,228,229,230,231,232,233,234,237,238,240,241,242,243,247,251,252,256,257,258,259,262,263,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,291,292,294,298,300,301,302,303,306,309,310,311,312,318,319,322,323,324,325,330,332,334,336,337,340,341,342,347,348,349,351,352,353,354,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,403,404,405,406,429,431,432,433,434,439,440,446,447,448,452,455,457,459,460,463,464,466,467,469,470,471,474,478,479,480,482,483,484,487,489,493,494,498,499,500,501,502,503,508,509,510,511,512,515,516,518,520,524,525,526,527,528,529,532,533,535,536,543,544,545,546,547,548,549,550,551,552,553,554,555,556,559,561,562,563,565,566,567,570,572,574,577,578,579,580,582,584,585,586,],[17,17,-60,-62,-63,-64,-65,-66,-67,82,-70,-71,92,17,94,96,17,-340,-340,-340,-128,-102,17,-340,-29,-107,-340,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,125,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,126,-61,82,17,-129,125,-98,-99,-100,-101,-104,82,-134,82,137,-37,-90,-72,141,-340,96,-93,-9,-10,-340,-94,-95,-103,-129,125,-97,-183,-27,-28,-185,-96,-169,-170,-338,-149,-150,141,-76,238,137,82,238,-340,-328,-30,238,-306,-287,-288,-289,-286,288,294,294,141,298,299,-292,-315,-290,-291,-304,-305,-307,304,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,238,-340,-28,322,82,238,-184,-186,-152,-339,82,-145,-147,351,238,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,-315,141,362,238,366,367,238,372,238,-77,-38,-340,238,-340,-28,-73,-329,-69,238,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,141,238,238,-300,-301,238,238,-334,-335,-336,-337,-287,238,238,-35,-36,322,449,322,-340,-45,458,-171,141,-154,-156,-151,-143,-144,-148,141,-146,-130,351,351,-177,-178,-221,-220,238,238,-237,238,238,238,238,238,-87,-74,238,-233,-234,-236,238,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,238,-12,141,-287,238,238,-43,-44,141,-308,-295,-296,-297,-298,-299,-31,-34,449,458,-340,322,238,238,-172,-173,-153,-155,82,141,-222,238,-224,141,528,-86,-75,238,-232,-235,-340,-201,-39,-42,-340,-68,141,-293,-294,238,-32,-33,238,-340,-28,-210,-216,-214,-287,-223,238,238,238,238,238,238,-11,-40,-41,-287,238,238,-50,-51,-212,-211,-213,-215,-225,-87,-74,-227,-228,238,-302,-340,-309,238,-46,-49,238,238,-303,-47,-48,-226,-229,238,238,-231,-230,]),'TIMES':([0,2,4,5,6,7,8,9,10,12,14,15,17,21,22,23,24,25,26,27,29,30,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,69,70,71,72,74,75,76,77,78,81,82,90,91,94,95,98,99,100,101,102,103,104,106,112,113,114,115,116,117,118,119,121,122,123,126,127,128,134,135,136,139,141,143,145,146,147,148,149,150,151,152,153,154,155,156,158,159,160,161,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,192,194,195,197,206,208,209,212
,214,216,221,222,223,224,225,226,227,228,229,230,231,232,233,234,238,242,247,250,251,256,257,258,259,262,263,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,291,292,293,294,295,296,297,298,300,301,302,303,306,309,310,322,323,330,332,334,336,337,340,341,342,347,348,349,351,353,354,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,448,455,457,459,460,463,464,466,467,469,470,471,474,479,480,482,483,484,487,489,497,498,499,500,501,502,503,505,506,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,561,562,563,565,570,572,574,579,580,582,584,585,586,],[30,30,-60,-62,-63,-64,-65,-66,-67,30,-70,-71,30,-340,-340,-340,-128,-102,30,-340,-107,-340,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,30,30,-129,-134,-98,-99,-100,-101,-104,-134,30,-90,-72,147,-340,-93,-9,-10,-340,-94,-95,-103,-129,-97,30,-27,-28,-185,-96,-169,-170,-338,-149,-150,147,-76,147,30,147,-340,-328,147,-306,269,-258,-287,-288,-289,-286,-277,-279,147,147,147,147,-292,-315,-290,-291,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,306,-340,-28,30,30,147,-186,-152,-339,30,-145,-147,30,147,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,-315,147,147,147,147,-277,-77,-340,400,-340,-28,-73,-329,-69,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,147,-300,-301,-280,147,-281,-282,-283,147,-334,-335,-336,-337,-287,147,147,30,456,-171,147,-154,-156,-151,-143,-144,-148,147,-146,-130,30,-177,-178,-221,-220,147,147,-237,147,147,147,147,147,-87,-74,147,-233,-234,-236,147,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,147,-12,147,-287,147,147,147,-308,-259,-260,-261,269,269,269,269,269,269,269,269,269,269,269,269,269,269,269,-295,-296,-297,-298,-299,-340,147,520,-172,-173,-153,-155,30,147,-222,147,-224,147,-86,-75,147,-232,-235,-340,-201,-278,-340,-68,147,-293,-294,147,-284,-285,543,-340,-28,-287,-223,147,147,147,147,147,147,-11,-287,147,147,-225,-87,-74,-227,-228,147,-302,-340,-309,147,147,147,-303,-226,-229,147,147,-231,-230,]),'TYPEID':([0,2,4,5,6,7,8,9,10,11,12,14,15,19,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,62,63,64,67,68,69,70,71,72,73,74,75,76,77,78,80,81,82,90,91,96,97,98,99,100,101,102,103,104,106,112,113,114,115,116,117,118,119,121,122,123,124,125,126,127,128,129,134,137,140,141,192,193,194,196,197,203,204,205,206,207,208,209,210,211,212,213,214,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,289,290,294,298,299,304,311,312,313,318,322,330,333,334,335,336,337,338,340,341,342,348,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,459,460,463,464,465,466,469,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[35,35,-60,-62,-63,-64,-65,-66,-67,35,89,-70,-71,-52,-340,-340,-340,-128,-102,35,-340,-29,-107,-340,-125,-126,-127,-129,-241,119,123,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-157,-158,-61,35,-91,89,35,-129,-134,35,-98,-99,-100,-101,-104,89,-134,89,-90,-72,35,-53,-93,-9,-10,-340,-94,-95,-103,-129,-97,-183,-27,-28,-185,-96,-169,-170,-338,-149,-150,35,35,35,-76,35,-92,89,35,-30,35,324,35,89,-1
84,-186,35,35,35,-152,-159,-339,89,-162,-163,-145,35,-147,35,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,35,-77,-73,-69,432,434,35,35,35,35,-35,-36,35,324,35,-171,35,-154,35,-156,-151,-160,-143,-144,-148,-146,-130,35,-177,-178,-221,-220,-237,-87,-84,35,-233,-234,-236,-31,-34,35,35,-172,-173,-153,-155,-161,89,-222,-224,-86,-84,-232,-235,-68,-32,-33,-223,-225,-87,-84,-227,-228,-226,-229,-231,-230,]),'ENUM':([0,2,4,5,6,7,8,9,10,11,14,15,19,21,22,23,26,27,28,29,34,50,51,52,53,54,55,56,57,58,59,60,64,67,68,70,71,72,73,90,91,96,97,98,99,100,101,102,103,112,116,117,121,124,125,126,127,128,129,137,140,141,193,197,203,204,205,207,208,210,211,213,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,294,298,299,304,311,312,313,322,333,335,338,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,465,469,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[36,36,-60,-62,-63,-64,-65,-66,-67,36,-70,-71,-52,-340,-340,-340,36,-340,-29,-107,-340,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,36,-91,36,-340,-134,36,-90,-72,36,-53,-93,-9,-10,-340,-94,-95,-97,-185,-96,-338,36,36,36,-76,36,-92,36,-30,36,36,-186,36,36,36,-159,-339,-162,-163,36,36,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,36,-77,-73,-69,36,36,36,36,-35,-36,36,36,36,36,-160,-130,36,-177,-178,-221,-220,-237,-87,-84,36,-233,-234,-236,-31,-34,36,36,-161,-222,-224,-86,-84,-232,-235,-68,-32,-33,-223,-225,-87,-84,-227,-228,-226,-229,-231,-230,]),'VOID':([0,2,4,5,6,7,8,9,10,11,12,14,15,19,21,22,23,24,25,26,27,28,29,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,67,68,69,70,71,72,73,74,75,76,77,78,81,90,91,96,97,98,99,100,101,102,103,104,106,112,116,117,118,119,121,122,123,124,125,126,127,128,129,137,140,141,192,193,197,203,204,205,206,207,208,209,210,211,212,213,214,216,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,294,298,299,304,311,312,313,322,330,333,334,335,336,337,338,340,341,342,348,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,459,460,463,464,465,469,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[38,38,-60,-62,-63,-64,-65,-66,-67,38,38,-70,-71,-52,-340,-340,-340,-128,-102,38,-340,-29,-107,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,38,-91,38,38,-129,-134,38,-98,-99,-100,-101,-104,-134,-90,-72,38,-53,-93,-9,-10,-340,-94,-95,-103,-129,-97,-185,-96,-169,-170,-338,-149,-150,38,38,38,-76,38,-92,38,-30,38,38,38,-186,38,38,38,-152,-159,-339,38,-162,-163,-145,38,-147,38,38,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,38,-77,-73,-69,38,38,38,38,-35,-36,38,38,-171,38,-154,38,-156,-151,-160,-143,-144,-148,-146,-130,38,-177,-178,-221,-220,-237,-87,-84,38,-233,-234,-236,-31,-34,38,38,-172,-173,-153,-155,-161,-222,-224,-86,-84,-232,-235,-68,-32,-33,-223,-225,-87,-84,-227,-228,-226,-229,-231,-230,]),'_BOOL':([0,2,4,5,6,7,8,9,10,11,12,14,15,19,21,22,23,24,25,26,27,28,29,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,67,68,69,70,71,72,73,74,75,76,77,78,81,90,91,96,97,98,99,100,101,102,103,104,106,112,116,117,118,119,121,122,123,124,125,126,127,128,129,137,140,141,192,193,197,203,204,205,206,207,208,209,210,211,212,213,214,216,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,294,298,299,304,311,312,313,322,330,333,334,335,336,337,338,340,341,342,348,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,459,460,463,464,465,46
9,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[39,39,-60,-62,-63,-64,-65,-66,-67,39,39,-70,-71,-52,-340,-340,-340,-128,-102,39,-340,-29,-107,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,39,-91,39,39,-129,-134,39,-98,-99,-100,-101,-104,-134,-90,-72,39,-53,-93,-9,-10,-340,-94,-95,-103,-129,-97,-185,-96,-169,-170,-338,-149,-150,39,39,39,-76,39,-92,39,-30,39,39,39,-186,39,39,39,-152,-159,-339,39,-162,-163,-145,39,-147,39,39,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,39,-77,-73,-69,39,39,39,39,-35,-36,39,39,-171,39,-154,39,-156,-151,-160,-143,-144,-148,-146,-130,39,-177,-178,-221,-220,-237,-87,-84,39,-233,-234,-236,-31,-34,39,39,-172,-173,-153,-155,-161,-222,-224,-86,-84,-232,-235,-68,-32,-33,-223,-225,-87,-84,-227,-228,-226,-229,-231,-230,]),'CHAR':([0,2,4,5,6,7,8,9,10,11,12,14,15,19,21,22,23,24,25,26,27,28,29,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,67,68,69,70,71,72,73,74,75,76,77,78,81,90,91,96,97,98,99,100,101,102,103,104,106,112,116,117,118,119,121,122,123,124,125,126,127,128,129,137,140,141,192,193,197,203,204,205,206,207,208,209,210,211,212,213,214,216,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,294,298,299,304,311,312,313,322,330,333,334,335,336,337,338,340,341,342,348,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,459,460,463,464,465,469,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[40,40,-60,-62,-63,-64,-65,-66,-67,40,40,-70,-71,-52,-340,-340,-340,-128,-102,40,-340,-29,-107,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,40,-91,40,40,-129,-134,40,-98,-99,-100,-101,-104,-134,-90,-72,40,-53,-93,-9,-10,-340,-94,-95,-103,-129,-97,-185,-96,-169,-170,-338,-149,-150,40,40,40,-76,40,-92,40,-30,40,40,40,-186,40,40,40,-152,-159,-339,40,-162,-163,-145,40,-147,40,40,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,40,-77,-73,-69,40,40,40,40,-35,-36,40,40,-171,40,-154,40,-156,-151,-160,-143,-144,-148,-146,-130,40,-177,-178,-221,-220,-237,-87,-84,40,-233,-234,-236,-31,-34,40,40,-172,-173,-153,-155,-161,-222,-224,-86,-84,-232,-235,-68,-32,-33,-223,-225,-87,-84,-227,-228,-226,-229,-231,-230,]),'SHORT':([0,2,4,5,6,7,8,9,10,11,12,14,15,19,21,22,23,24,25,26,27,28,29,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,64,67,68,69,70,71,72,73,74,75,76,77,78,81,90,91,96,97,98,99,100,101,102,103,104,106,112,116,117,118,119,121,122,123,124,125,126,127,128,129,137,140,141,192,193,197,203,204,205,206,207,208,209,210,211,212,213,214,216,221,222,223,224,225,226,227,228,229,230,231,232,238,251,262,267,294,298,299,304,311,312,313,322,330,333,334,335,336,337,338,340,341,342,348,349,351,353,354,355,356,361,370,371,372,374,375,377,439,440,449,458,459,460,463,464,465,469,471,479,480,483,484,499,508,509,524,552,553,554,555,556,579,580,585,586,],[41,41,-60,-62,-63,-64,-65,-66,-67,41,41,-70,-71,-52,-340,-340,-340,-128,-102,41,-340,-29,-107,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-134,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-61,41,-91,41,41,-129,-134,41,-98,-99,-100,-101,-104,-134,-90,-72,41,-53,-93,-9,-10,-340,-94,-95,-103,-129,-97,-185,-96,-169,-170,-338,-149,-150,41,41,41,-76,41,-92,41,-30,41,41,41,-186,41,41,41,-152,-159,-339,41,-162,-163,-145,41,-147,41,41,-219,-217,-218,-78,-79,-80,-81,-82,-83,
[Auto-generated LALR parser tables omitted. This span is a fragment of PLY's machine-written `yacctab.py` (bundled via pycparser): `_lr_action` entries mapping parser states to shift/reduce actions for the C token set (type keywords such as INT, LONG, FLOAT, DOUBLE, _COMPLEX, SIGNED, UNSIGNED, __INT128, _ATOMIC; qualifiers CONST, RESTRICT, VOLATILE; storage classes AUTO, REGISTER, STATIC, EXTERN, TYPEDEF, _THREAD_LOCAL, INLINE, _NORETURN, _ALIGNAS; STRUCT/UNION; braces; statement keywords CASE through RETURN; operators PLUSPLUS, MINUSMINUS, SIZEOF, _ALIGNOF, AND, PLUS, MINUS, NOT, LNOT, OFFSETOF; and integer/float constant tokens). These tables are regenerated by PLY from the grammar and are not meant to be read or edited by hand.]
250,-251,-252,-253,-254,-11,171,-12,171,-287,171,171,171,-340,171,171,171,-222,171,-224,171,-86,-75,171,-232,-235,-340,-201,-340,-68,171,171,171,-340,-28,-287,-223,171,171,171,171,171,171,-11,-287,171,171,-225,-87,-74,-227,-228,171,-340,171,171,171,-226,-229,171,171,-231,-230,]),'HEX_FLOAT_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,172,-340,-27,-28,-185,-338,172,172,172,-340,172,-287,-288,-289,-286,172,172,172,172,-290,-291,172,-340,-28,172,-186,-339,172,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,172,172,172,172,-340,172,-340,-28,-73,-69,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,172,-287,172,172,-340,172,172,-221,-220,172,172,-237,172,172,172,172,172,-87,-74,172,-233,-234,-236,172,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,172,-12,172,-287,172,172,172,-340,172,172,172,-222,172,-224,172,-86,-75,172,-232,-235,-340,-201,-340,-68,172,172,172,-340,-28,-287,-223,172,172,172,172,172,172,-11,-287,172,172,-225,-87,-74,-227,-228,172,-340,172,172,172,-226,-229,172,172,-231,-230,]),'CHAR_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,173,-340,-27,-28,-185,-338,173,173,173,-340,173,-287,-288,-289,-286,173,173,173,173,-290,-291,173,-340,-28,173,-186,-339,173,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,173,173,173,173,-340,173,-340,-28,-73,-69,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,173,-287,173,173,-340,173,173,-221,-220,173,173,-237,173,173,173,173,173,-87,-74,173,-233,-234,-236,173,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,173,-12,173,-287,173,173,173,-340,173,173,173,-222,173,-224,173,-86,-75,173,-232,-235,-340,-201,-340,-68,173,173,173,-340,-28,-287,-223,173,173,173,173,173,173,-11,-287,173,173,-225,-87,-74,-227,-228,173,-340,173,173,173,-226,-229,173,173,-231,-230,]),'WCHAR_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369
,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,174,-340,-27,-28,-185,-338,174,174,174,-340,174,-287,-288,-289,-286,174,174,174,174,-290,-291,174,-340,-28,174,-186,-339,174,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,174,174,174,174,-340,174,-340,-28,-73,-69,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,174,-287,174,174,-340,174,174,-221,-220,174,174,-237,174,174,174,174,174,-87,-74,174,-233,-234,-236,174,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,174,-12,174,-287,174,174,174,-340,174,174,174,-222,174,-224,174,-86,-75,174,-232,-235,-340,-201,-340,-68,174,174,174,-340,-28,-287,-223,174,174,174,174,174,174,-11,-287,174,174,-225,-87,-74,-227,-228,174,-340,174,174,174,-226,-229,174,174,-231,-230,]),'U8CHAR_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,175,-340,-27,-28,-185,-338,175,175,175,-340,175,-287,-288,-289,-286,175,175,175,175,-290,-291,175,-340,-28,175,-186,-339,175,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,175,175,175,175,-340,175,-340,-28,-73,-69,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,175,-287,175,175,-340,175,175,-221,-220,175,175,-237,175,175,175,175,175,-87,-74,175,-233,-234,-236,175,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,175,-12,175,-287,175,175,175,-340,175,175,175,-222,175,-224,175,-86,-75,175,-232,-235,-340,-201,-340,-68,175,175,175,-340,-28,-287,-223,175,175,175,175,175,175,-11,-287,175,175,-225,-87,-74,-227,-228,175,-340,175,175,175,-226,-229,175,175,-231,-230,]),'U16CHAR_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,176,-340,-27,-28,-185,-338,176,176,176,-340,176,-287,-288,-289,-286,176,176,176,176,-290,-291,176,-340,-28,176,-186,-339,176,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,176,176,176,176,-340,176,-340,-28,-73,-69,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,176,-287,176,176,-340,17
6,176,-221,-220,176,176,-237,176,176,176,176,176,-87,-74,176,-233,-234,-236,176,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,176,-12,176,-287,176,176,176,-340,176,176,176,-222,176,-224,176,-86,-75,176,-232,-235,-340,-201,-340,-68,176,176,176,-340,-28,-287,-223,176,176,176,176,176,176,-11,-287,176,176,-225,-87,-74,-227,-228,176,-340,176,176,176,-226,-229,176,176,-231,-230,]),'U32CHAR_CONST':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,177,-340,-27,-28,-185,-338,177,177,177,-340,177,-287,-288,-289,-286,177,177,177,177,-290,-291,177,-340,-28,177,-186,-339,177,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,177,177,177,177,-340,177,-340,-28,-73,-69,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,177,-287,177,177,-340,177,177,-221,-220,177,177,-237,177,177,177,177,177,-87,-74,177,-233,-234,-236,177,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,177,-12,177,-287,177,177,177,-340,177,177,177,-222,177,-224,177,-86,-75,177,-232,-235,-340,-201,-340,-68,177,177,177,-340,-28,-287,-223,177,177,177,177,177,177,-11,-287,177,177,-225,-87,-74,-227,-228,177,-340,177,177,177,-226,-229,177,177,-231,-230,]),'STRING_LITERAL':([15,51,52,53,81,90,91,92,94,95,114,115,116,121,126,128,135,136,138,139,141,143,147,148,149,150,153,154,155,156,160,161,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,263,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,407,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,139,139,-340,-27,-28,-185,-338,139,139,139,-340,263,-328,139,263,-287,-288,-289,-286,139,139,139,139,-290,-291,139,-340,-28,139,-186,-339,139,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,139,139,139,139,-340,139,-340,-28,-73,-329,139,-69,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,139,-287,139,139,-340,139,139,-221,-220,139,139,-237,139,139,139,139,139,-87,-74,139,-233,-234,-236,139,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,139,-12,139,-287,139,139,139,263,-340,139,139,139,-222,139,-224,139,-86,-75,139,-232,-235,-340,-201,-340,-68,139,139,139,-340,-28,-287,-223,139,139,139,139,139,139,-11,-287,139,139,-225,-87,-74,-227,-228,139,-340,139,139,139,-226,-229,139,139,-231,-230,]),'WSTRING_LITERAL':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,164,178,179,180,181,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,2
34,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,300,301,302,303,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,178,-340,-27,-28,-185,-338,178,178,178,-340,178,-287,-288,-289,-286,178,178,178,178,-290,-291,300,-330,-331,-332,-333,178,-340,-28,178,-186,-339,178,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,178,178,178,178,-340,178,-340,-28,-73,-69,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,178,-334,-335,-336,-337,-287,178,178,-340,178,178,-221,-220,178,178,-237,178,178,178,178,178,-87,-74,178,-233,-234,-236,178,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,178,-12,178,-287,178,178,178,-340,178,178,178,-222,178,-224,178,-86,-75,178,-232,-235,-340,-201,-340,-68,178,178,178,-340,-28,-287,-223,178,178,178,178,178,178,-11,-287,178,178,-225,-87,-74,-227,-228,178,-340,178,178,178,-226,-229,178,178,-231,-230,]),'U8STRING_LITERAL':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,164,178,179,180,181,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,300,301,302,303,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,179,-340,-27,-28,-185,-338,179,179,179,-340,179,-287,-288,-289,-286,179,179,179,179,-290,-291,301,-330,-331,-332,-333,179,-340,-28,179,-186,-339,179,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,179,179,179,179,-340,179,-340,-28,-73,-69,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,179,-334,-335,-336,-337,-287,179,179,-340,179,179,-221,-220,179,179,-237,179,179,179,179,179,-87,-74,179,-233,-234,-236,179,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,179,-12,179,-287,179,179,179,-340,179,179,179,-222,179,-224,179,-86,-75,179,-232,-235,-340,-201,-340,-68,179,179,179,-340,-28,-287,-223,179,179,179,179,179,179,-11,-287,179,179,-225,-87,-74,-227,-228,179,-340,179,179,179,-226,-229,179,179,-231,-230,]),'U16STRING_LITERAL':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,164,178,179,180,181,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,300,301,302,303,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580
,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,180,-340,-27,-28,-185,-338,180,180,180,-340,180,-287,-288,-289,-286,180,180,180,180,-290,-291,302,-330,-331,-332,-333,180,-340,-28,180,-186,-339,180,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,180,180,180,180,-340,180,-340,-28,-73,-69,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,180,-334,-335,-336,-337,-287,180,180,-340,180,180,-221,-220,180,180,-237,180,180,180,180,180,-87,-74,180,-233,-234,-236,180,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,180,-12,180,-287,180,180,180,-340,180,180,180,-222,180,-224,180,-86,-75,180,-232,-235,-340,-201,-340,-68,180,180,180,-340,-28,-287,-223,180,180,180,180,180,180,-11,-287,180,180,-225,-87,-74,-227,-228,180,-340,180,180,180,-226,-229,180,180,-231,-230,]),'U32STRING_LITERAL':([15,51,52,53,81,90,91,94,95,114,115,116,121,126,128,135,136,141,147,148,149,150,153,154,155,156,160,161,164,178,179,180,181,182,183,184,195,197,208,221,222,223,224,225,226,227,228,229,230,231,232,234,238,242,247,256,257,258,259,262,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,300,301,302,303,306,309,310,323,332,347,355,356,358,360,361,362,365,366,367,369,370,371,372,374,375,377,378,379,380,381,382,383,384,385,386,387,388,389,392,393,394,397,400,401,402,405,448,455,457,467,469,470,471,474,479,480,482,483,484,487,489,498,499,500,503,510,511,512,520,524,525,526,527,528,529,532,533,543,544,545,552,553,554,555,556,559,562,565,570,572,579,580,582,584,585,586,],[-71,-131,-132,-133,-134,-90,-72,181,-340,-27,-28,-185,-338,181,181,181,-340,181,-287,-288,-289,-286,181,181,181,181,-290,-291,303,-330,-331,-332,-333,181,-340,-28,181,-186,-339,181,-219,-217,-218,-78,-79,-80,-81,-82,-83,-84,-85,181,181,181,181,-340,181,-340,-28,-73,-69,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,181,-334,-335,-336,-337,-287,181,181,-340,181,181,-221,-220,181,181,-237,181,181,181,181,181,-87,-74,181,-233,-234,-236,181,-244,-245,-246,-247,-248,-249,-250,-251,-252,-253,-254,-11,181,-12,181,-287,181,181,181,-340,181,181,181,-222,181,-224,181,-86,-75,181,-232,-235,-340,-201,-340,-68,181,181,181,-340,-28,-287,-223,181,181,181,181,181,181,-11,-287,181,181,-225,-87,-74,-227,-228,181,-340,181,181,181,-226,-229,181,181,-231,-230,]),'ELSE':([15,91,208,225,226,227,228,229,230,232,262,267,355,361,370,371,374,375,377,469,471,479,480,483,484,499,524,552,553,554,555,556,579,580,585,586,],[-71,-72,-339,-78,-79,-80,-81,-82,-83,-85,-73,-69,-221,-237,-87,-84,-233,-234,-236,-222,-224,-86,-84,-232,-235,-68,-223,-225,570,-84,-227,-228,-226,-229,-231,-230,]),'PPPRAGMASTR':([15,],[91,]),'EQUALS':([19,28,73,86,87,88,89,97,111,130,132,139,140,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,202,208,233,250,252,263,291,292,293,295,296,297,300,301,302,303,311,312,395,396,403,404,406,429,431,432,433,434,439,440,490,492,493,494,497,501,502,505,506,508,509,534,535,536,561,563,574,],[-52,-29,-181,135,-182,-54,-37,-53,195,-181,-55,-328,-30,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,332,-339,-315,379,-38,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-35,-36,489,-202,-43,-44,-308,-295,-296,-297,-298,-299,-31,-34,-203,-205,-39,-42,-278,-293,-294,-284,-285,-32,-33,-204,-40,-41,-302,-309,-303,]),'COMMA':([19,24,25,28,29,30,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,51,52,53,54,55,56,57,58,59,60,73,74,75,76,77,78
,81,84,85,86,87,88,89,97,104,106,108,110,111,113,114,115,116,118,119,122,123,130,132,139,140,142,143,144,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,187,189,190,191,192,196,197,200,201,202,206,208,212,214,216,233,239,248,249,250,252,253,254,255,263,265,291,292,293,295,296,297,300,301,302,303,311,312,315,316,317,318,319,320,321,324,325,326,327,328,329,330,331,334,336,337,340,341,342,344,345,346,348,349,350,352,353,354,376,391,403,404,406,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,428,429,430,431,432,433,434,438,439,440,444,445,446,447,459,460,461,462,463,464,468,472,473,475,476,477,485,486,488,493,494,497,501,502,505,506,508,509,515,516,518,522,523,531,535,536,537,538,539,546,547,548,549,550,551,557,560,561,563,566,567,574,576,577,578,],[-52,-128,-102,-29,-107,-340,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-181,-98,-99,-100,-101,-104,-134,134,-135,-137,-182,-54,-37,-53,-103,-129,194,-139,-141,-183,-27,-28,-185,-169,-170,-149,-150,-181,-55,-328,-30,266,-306,-255,-256,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,313,314,-189,-194,-340,-184,-186,331,-174,-179,-152,-339,-145,-147,-340,-315,365,-238,-242,-277,-38,-136,-138,-196,-329,365,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-35,-36,-191,-192,-193,-207,-56,-1,-2,-45,-209,-140,-142,331,331,-171,-175,-154,-156,-151,-143,-144,-148,466,-164,-166,-146,-130,-206,-207,-177,-178,365,487,-43,-44,-308,365,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,365,503,-295,-313,-296,-297,-298,-299,507,-31,-34,-190,-195,-57,-208,-172,-173,-176,-180,-153,-155,-168,365,-240,-239,365,365,-243,-197,-199,-39,-42,-278,-293,-294,-284,-285,-32,-33,-210,-216,-214,-165,-167,-198,-40,-41,562,-257,-314,-50,-51,-212,-211,-213,-215,365,-200,-302,-309,-46,-49,-303,365,-47,-48,]),'RPAREN':([19,24,25,28,29,30,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,51,52,53,54,55,56,57,58,59,60,74,75,76,77,78,81,88,89,93,96,97,104,106,113,114,115,116,118,119,122,123,132,133,137,138,139,140,142,143,144,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,185,186,187,188,189,190,191,192,196,197,206,208,212,214,215,216,217,218,239,248,249,250,252,260,261,263,264,265,288,291,292,293,295,296,297,300,301,302,303,311,312,315,316,317,318,319,320,321,322,324,325,330,334,336,337,340,341,342,348,349,350,351,352,353,354,355,357,363,364,403,404,406,407,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,428,429,430,431,432,433,434,435,436,437,439,440,443,444,445,446,447,449,450,451,452,453,454,458,459,460,463,464,472,473,475,476,477,485,493,494,497,501,502,505,506,508,509,513,514,515,516,518,521,535,536,538,539,540,541,546,547,548,549,550,551,557,559,561,563,566,567,572,573,574,575,577,578,581,583,],[-52,-128,-102,-29,-107,-340,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-98,-99,-100,-101,-104,-134,-54,-37,140,-340,-53,-103,-129,-183,-27,-28,-185,-169,-170,-149,-150,-55,252,-340,262,-328,-30,267,-306,-255,-256,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,311,312,-187,-17,-18,-189,-194,-340,-184,-186,-152,-339,-145,-147,349,-340,353,354,-14,-238,-242,-277,-38,403,4
04,-329,405,406,429,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-35,-36,-191,-192,-193,-207,-56,-1,-2,-340,-45,-209,-171,-154,-156,-151,-143,-144,-148,-146,-130,-206,-340,-207,-177,-178,-221,-13,473,474,-43,-44,-308,499,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,502,-295,-313,-296,-297,-298,-299,504,505,506,-31,-34,-188,-190,-195,-57,-208,-340,515,516,-207,-23,-24,-340,-172,-173,-153,-155,525,-240,-239,526,527,-243,-39,-42,-278,-293,-294,-284,-285,-32,-33,546,547,-210,-216,-214,551,-40,-41,-257,-314,563,-310,-50,-51,-212,-211,-213,-215,571,-340,-302,-309,-46,-49,-340,582,-303,-311,-47,-48,584,-312,]),'COLON':([19,24,28,31,32,33,35,38,39,40,41,42,43,44,45,46,47,48,49,51,52,53,81,87,88,89,97,106,118,119,122,123,130,132,139,140,143,144,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,206,208,209,212,214,233,235,248,249,250,252,263,291,292,293,295,296,297,300,301,302,303,311,312,330,334,336,337,340,341,342,346,348,349,353,354,359,403,404,406,408,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,439,440,459,460,463,464,466,473,475,485,493,494,497,501,502,505,506,508,509,535,536,538,561,563,574,],[-52,-128,-29,-125,-126,-127,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-131,-132,-133,-134,-182,-54,-37,-53,-129,-169,-170,-149,-150,-181,-55,-328,-30,-306,-255,-256,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-152,-339,347,-145,-147,358,360,-238,-242,-277,-38,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-35,-36,-171,-154,-156,-151,-143,-144,-148,467,-146,-130,-177,-178,470,-43,-44,-308,500,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,-295,-296,-297,-298,-299,-31,-34,-172,-173,-153,-155,347,-240,-239,-243,-39,-42,-278,-293,-294,-284,-285,-32,-33,-40,-41,-257,-302,-309,-303,]),'LBRACKET':([19,24,25,28,29,30,31,32,33,34,35,38,39,40,41,42,43,44,45,46,47,48,49,51,52,53,54,55,56,57,58,59,60,74,75,76,77,78,81,88,89,97,104,106,113,114,115,116,118,119,121,122,123,132,139,140,143,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,192,196,197,206,208,212,214,216,233,252,256,263,291,292,300,301,302,303,311,312,318,319,322,324,325,330,334,336,337,340,341,342,348,349,351,352,353,354,395,396,403,404,406,429,431,432,433,434,439,440,446,447,452,459,460,463,464,487,490,492,493,494,498,501,502,508,509,515,516,518,534,535,536,540,541,546,547,548,549,550,551,561,562,563,566,567,574,575,577,578,583,],[95,-128,-102,-29,-107,-340,-125,-126,-127,-129,-241,-113,-114,-115,-116,-117,-118,-119,-120,-121,-122,-123,-124,-131,-132,-133,-105,-106,-108,-109,-110,-111,-112,-98,-99,-100,-101,-104,-134,136,-37,95,-103,-129,-183,-27,-28,-185,-169,-170,-338,-149,-150,136,-328,-30,-306,287,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,323,-184,-186,-152,-339,-145,-147,323,-315,-38,397,-329,-300,-301,-334,-335,-336,-337,-35,-36,323,448,323,-45,457,-171,-154,-156,-151,-143,-144,-148,-146,-130,323,323,-177,-178,397,-202,-43,-44,-308,-295,-296,-297,-298,-299,-31,-34,448,457,323,-172,-173,-153,-155,397,-203,-205,-39,-42,397,-293,-294,-32,-33,-210,-216,-214,-204,-40,-41,565,-310,-50,-51,-212,-211,-213,-215,-302,397,-309,-46,-49,-303,-311,-47,-48,-312,]),'RBRACKET':([51,52,53,81,95,114,115,116,136,139,143,144,145,146,151,152,158,159,162,163,164,166,167,168,169,
170,171,172,173,174,175,176,177,178,179,180,181,182,184,197,208,248,249,250,257,259,263,291,292,293,295,296,297,300,301,302,303,305,306,307,308,323,399,400,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,427,429,431,432,433,434,441,442,448,455,456,457,473,475,485,491,495,496,497,501,502,505,506,510,512,517,519,520,538,542,543,561,563,568,569,574,576,],[-131,-132,-133,-134,-340,-27,-28,-185,-340,-328,-306,-255,-256,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-340,-28,-186,-339,-238,-242,-277,-340,-28,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,439,440,-3,-4,-340,493,494,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,501,-295,-296,-297,-298,-299,508,509,-340,-340,518,-340,-240,-239,-243,534,535,536,-278,-293,-294,-284,-285,-340,-28,548,549,550,-257,566,567,-302,-309,577,578,-303,583,]),'PERIOD':([121,139,143,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,256,263,291,292,300,301,302,303,395,396,406,429,431,432,433,434,487,490,492,498,501,502,534,540,541,561,562,563,574,575,583,],[-338,-328,-306,289,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,398,-329,-300,-301,-334,-335,-336,-337,398,-202,-308,-295,-296,-297,-298,-299,398,-203,-205,398,-293,-294,-204,564,-310,-302,398,-309,-303,-311,-312,]),'ARROW':([139,143,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,263,291,292,300,301,302,303,406,429,431,432,433,434,501,502,561,563,574,],[-328,-306,290,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-329,-300,-301,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-293,-294,-302,-309,-303,]),'CONDOP':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,268,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'DIVIDE':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,270,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,270,270,270,270,270,270,270,270,270,270,270,270,270,270,270,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'MOD':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,271,-258,-
277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,271,271,271,271,271,271,271,271,271,271,271,271,271,271,271,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'RSHIFT':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,274,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,274,274,274,274,274,274,274,274,274,274,274,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LSHIFT':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,275,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,275,275,275,275,275,275,275,275,275,275,275,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LT':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,276,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,276,276,276,276,276,276,276,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LE':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,277,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,277,277,277,277,277,277,277,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'GE':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,278,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,278,278,2
78,278,278,278,278,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'GT':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,279,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,279,279,279,279,279,279,279,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'EQ':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,280,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,280,280,280,280,280,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'NE':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,281,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,281,281,281,281,281,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'OR':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,283,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,283,283,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'XOR':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,284,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,284,-274,284,284,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LAND':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293
,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,285,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,285,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LOR':([139,143,145,146,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,286,-258,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,-277,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-259,-260,-261,-262,-263,-264,-265,-266,-267,-268,-269,-270,-271,-272,-273,-274,-275,-276,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'XOREQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,380,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'TIMESEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,381,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'DIVEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,382,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'MODEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,383,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'PLUSEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,384,-329,-300
,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'MINUSEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,385,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'LSHIFTEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,386,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'RSHIFTEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,387,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'ANDEQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,388,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'OREQUAL':([139,143,151,152,158,159,162,163,164,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,208,233,250,263,291,292,293,295,296,297,300,301,302,303,406,429,431,432,433,434,497,501,502,505,506,561,563,574,],[-328,-306,-277,-279,-292,-315,-304,-305,-307,-316,-317,-318,-319,-320,-321,-322,-323,-324,-325,-326,-327,-330,-331,-332,-333,-339,-315,389,-329,-300,-301,-280,-281,-282,-283,-334,-335,-336,-337,-308,-295,-296,-297,-298,-299,-278,-293,-294,-284,-285,-302,-309,-303,]),'ELLIPSIS':([313,],[443,]),}
+
+_lr_action = {}
+for _k, _v in _lr_action_items.items():
+   for _x,_y in zip(_v[0],_v[1]):
+      if not _x in _lr_action: _lr_action[_x] = {}
+      _lr_action[_x][_k] = _y
+del _lr_action_items
+
+_lr_goto_items = 
{'translation_unit_or_empty':([0,],[1,]),'translation_unit':([0,],[2,]),'empty':([0,11,12,21,22,23,26,27,30,34,69,70,71,73,95,96,101,128,136,137,182,183,192,209,216,221,242,256,257,258,322,323,351,358,360,369,372,448,449,455,457,458,470,482,487,498,510,511,525,526,527,529,559,562,570,572,582,584,],[3,66,83,99,99,99,107,99,114,99,83,107,99,66,114,188,99,220,114,188,307,114,320,343,320,357,357,392,307,114,453,114,453,357,357,357,357,114,188,307,307,453,357,357,533,533,307,114,357,357,357,357,357,533,357,357,357,357,]),'external_declaration':([0,2,],[4,64,]),'function_definition':([0,2,],[5,5,]),'declaration':([0,2,11,67,73,128,221,372,],[6,6,68,129,68,223,223,482,]),'pp_directive':([0,2,],[7,7,]),'pppragma_directive':([0,2,124,128,203,204,205,221,242,333,335,358,360,369,470,525,526,527,570,582,584,],[8,8,211,231,211,211,211,231,371,211,211,371,371,480,371,554,371,371,371,371,371,]),'static_assert':([0,2,128,221,242,358,360,369,470,525,526,527,570,582,584,],[10,10,232,232,232,232,232,232,232,232,232,232,232,232,232,]),'id_declarator':([0,2,12,17,26,69,70,82,134,192,194,209,322,466,],[11,11,73,93,111,130,111,93,130,315,130,130,93,130,]),'declaration_specifiers':([0,2,11,67,73,96,128,137,221,313,322,351,372,449,458,],[12,12,69,69,69,192,69,192,69,192,192,192,69,192,192,]),'decl_body':([0,2,11,67,73,128,221,372,],[13,13,13,13,13,13,13,13,]),'direct_id_declarator':([0,2,12,17,20,26,69,70,80,82,134,192,194,209,318,322,452,466,],[19,19,19,19,97,19,19,19,97,19,19,19,19,19,97,19,97,19,]),'pointer':([0,2,12,17,26,69,70,82,113,134,192,194,209,216,322,351,466,],[20,20,80,20,20,80,20,80,196,80,318,80,80,352,452,352,80,]),'type_qualifier':([0,2,11,12,21,22,23,27,30,34,67,69,71,73,95,96,101,115,124,125,126,128,136,137,141,183,184,192,203,204,205,209,213,216,221,238,258,259,294,298,299,304,313,322,323,333,335,351,372,448,449,458,511,512,],[21,21,21,74,21,21,21,21,116,21,21,74,21,21,116,21,21,197,116,116,116,21,116,21,116,116,197,74,116,116,116,341,197,341,21,116,116,197,116,116,116,116,21,21,116,116,116,21,21,116,21,21,116,197,]),'storage_class_specifier':([0,2,11,12,21,22,23,27,34,67,69,71,73,96,101,128,137,192,221,313,322,351,372,449,458,],[22,22,22,75,22,22,22,22,22,22,75,22,22,22,22,22,22,75,22,22,22,22,22,22,22,]),'function_specifier':([0,2,11,12,21,22,23,27,34,67,69,71,73,96,101,128,137,192,221,313,322,351,372,449,458,],[23,23,23,76,23,23,23,23,23,23,76,23,23,23,23,23,23,76,23,23,23,23,23,23,23,]),'type_specifier_no_typeid':([0,2,11,12,26,67,69,70,73,96,124,125,126,128,137,141,192,193,203,204,205,209,213,216,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[24,24,24,77,24,24,77,24,24,24,24,24,24,24,24,24,77,24,24,24,24,340,24,340,24,24,24,24,24,24,24,24,24,24,24,24,24,24,]),'type_specifier':([0,2,11,26,67,70,73,96,124,125,126,128,137,141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[25,25,25,104,25,104,25,25,212,212,212,25,25,212,104,212,212,212,348,25,212,212,212,212,212,25,25,212,212,25,25,25,25,]),'declaration_specifiers_no_type':([0,2,11,21,22,23,27,34,67,71,73,96,101,128,137,221,313,322,351,372,449,458,],[26,26,70,100,100,100,100,100,70,100,70,193,100,70,193,70,193,193,193,70,193,193,]),'alignment_specifier':([0,2,11,12,21,22,23,27,34,67,69,71,73,96,101,124,125,126,128,137,141,192,203,204,205,209,216,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[27,27,27,78,27,27,27,27,27,27,78,27,27,27,27,214,214,214,27,27,214,78,214,214,214,342,342,27,214,214,214,214,214,27,27,214,214,27,27,27,27,]),'typedef_name':([0,2,11,26,67,70,73,96,124,125,126,128,137,
141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,31,]),'enum_specifier':([0,2,11,26,67,70,73,96,124,125,126,128,137,141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,32,]),'struct_or_union_specifier':([0,2,11,26,67,70,73,96,124,125,126,128,137,141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,33,]),'atomic_specifier':([0,2,11,21,22,23,26,27,34,67,70,71,73,96,101,124,125,126,128,137,141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[34,34,71,101,101,101,106,101,101,71,106,101,71,34,101,106,106,106,71,34,106,106,106,106,106,106,71,106,106,106,106,106,34,34,106,106,34,71,34,34,]),'struct_or_union':([0,2,11,26,67,70,73,96,124,125,126,128,137,141,193,203,204,205,213,221,238,294,298,299,304,313,322,333,335,351,372,449,458,],[37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,37,]),'declaration_list_opt':([11,73,],[65,131,]),'declaration_list':([11,73,],[67,67,]),'init_declarator_list_opt':([12,69,],[79,79,]),'init_declarator_list':([12,69,],[84,84,]),'init_declarator':([12,69,134,194,],[85,85,253,326,]),'declarator':([12,69,134,194,209,466,],[86,86,86,86,346,346,]),'typeid_declarator':([12,69,82,134,194,209,466,],[87,87,133,87,87,87,87,]),'direct_typeid_declarator':([12,69,80,82,134,194,209,466,],[88,88,132,88,88,88,88,88,]),'declaration_specifiers_no_type_opt':([21,22,23,27,34,71,101,],[98,102,103,112,117,117,117,]),'id_init_declarator_list_opt':([26,70,],[105,105,]),'id_init_declarator_list':([26,70,],[108,108,]),'id_init_declarator':([26,70,],[110,110,]),'type_qualifier_list_opt':([30,95,136,183,258,323,448,511,],[113,182,257,309,401,455,510,544,]),'type_qualifier_list':([30,95,124,125,126,136,141,183,203,204,205,238,258,294,298,299,304,323,333,335,448,511,],[115,184,213,213,213,259,213,115,213,213,213,213,115,213,213,213,213,115,213,213,512,115,]),'brace_open':([36,37,65,118,119,122,123,128,131,135,195,221,238,242,358,360,369,393,405,470,474,504,505,525,526,527,532,570,582,584,],[120,124,128,198,199,203,204,128,128,256,256,128,128,128,128,128,128,256,498,128,498,498,498,128,128,128,256,128,128,128,]),'compound_statement':([65,128,131,221,238,242,358,360,369,470,525,526,527,570,582,584,],[127,227,251,227,363,227,227,227,227,227,227,227,227,227,227,227,]),'unified_string_literal':([92,94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,266,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[138,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,407,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,143,]),'constant_expression':([94,126,234,332,347,397,467,],[142,218,359,462,468,491,523,]),'conditional_expression':([94,126,128,135,141,182,195,221,234,238,242,247,257,268,287,288,294,298,309,310,332,347,358,36
0,362,365,366,367,369,372,378,393,397,401,402,455,457,467,470,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[144,144,249,249,249,249,249,249,144,249,249,249,249,249,249,249,249,249,249,249,144,144,249,249,249,249,249,249,249,249,249,249,144,249,249,249,249,144,249,249,538,249,249,249,249,249,249,249,249,249,249,249,249,249,249,249,249,]),'binary_expression':([94,126,128,135,141,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,455,457,467,470,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[145,145,145,145,145,145,145,145,145,145,145,145,145,145,409,410,411,412,413,414,415,416,417,418,419,420,421,422,423,424,425,426,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,145,]),'cast_expression':([94,126,128,135,141,155,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[146,146,146,146,146,296,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,497,146,146,146,146,497,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,146,]),'unary_expression':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[151,151,250,250,250,293,295,151,297,250,250,250,151,250,250,250,250,250,151,151,151,151,151,151,151,151,151,151,151,151,151,151,151,151,151,151,250,250,250,250,250,250,151,151,250,250,250,250,250,250,250,250,250,250,151,250,250,151,250,250,151,250,151,250,151,250,250,250,250,250,250,250,250,250,250,250,250,250,250,250,250,]),'postfix_expression':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,152,]),'unary_operator':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,
155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,155,]),'primary_expression':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,158,]),'identifier':([94,96,126,128,135,137,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,314,332,347,358,360,362,365,366,367,369,372,378,393,397,398,401,402,405,449,455,457,467,470,474,482,500,503,507,510,525,526,527,528,529,532,544,545,559,564,565,570,572,582,584,],[162,191,162,162,162,191,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,162,445,162,162,162,162,162,162,162,162,162,162,162,162,162,492,162,162,162,191,162,162,162,162,162,162,162,162,541,162,162,162,162,162,162,162,162,162,162,575,162,162,162,162,162,]),'constant':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,163,]),'unified_wstring_literal':([94,126,128,135,141,153,154,155,156,182,195,221,234,238,242,247,257,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,294,298,309,310,332,347,358,360,362,365,366,367,369,372,378,393,397,401,402,405,455,457,467,470,474,482,500,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,164,]),'parameter_type_list':([96,137,322,351,449,458,],[185,260,454,454,513,454,]),'identifier_list_opt':([96,137,449,],[186,261,514,]),'parameter_list':([96,137,322,351,449,458,],[187,187,187,187,187,187,]),'identifier_list':([96,137,449,],[189,189,189,]),'parameter_declaration':([96,137,313,322,351,449,458,],[190,190,444,190,190,190,190,]),'enumerator_list':([120,198,199,],[200,328,329,]),'enumerator':([120,198,199,331,],[201,201,201,461,]),'struct_declaration_list':([124,203,204,],[205,333,335,]),'brace_close':([124,200,203,204,205,219,328,329,333,335,390,487,537,562,],[206,330,334,336,337,355,459,460,463,464,486,531,561,574,]),'struct_declaration':([124,203,204,205,333,335,],[207,207,207,338
,338,338,]),'specifier_qualifier_list':([124,125,126,141,203,204,205,238,294,298,299,304,333,335,],[209,216,216,216,209,209,209,216,216,216,216,216,209,209,]),'type_name':([125,126,141,238,294,298,299,304,],[215,217,264,364,435,436,437,438,]),'block_item_list_opt':([128,],[219,]),'block_item_list':([128,],[221,]),'block_item':([128,221,],[222,356,]),'statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[224,224,370,370,370,479,370,553,370,370,370,370,370,]),'labeled_statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[225,225,225,225,225,225,225,225,225,225,225,225,225,]),'expression_statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[226,226,226,226,226,226,226,226,226,226,226,226,226,]),'selection_statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[228,228,228,228,228,228,228,228,228,228,228,228,228,]),'iteration_statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[229,229,229,229,229,229,229,229,229,229,229,229,229,]),'jump_statement':([128,221,242,358,360,369,470,525,526,527,570,582,584,],[230,230,230,230,230,230,230,230,230,230,230,230,230,]),'expression_opt':([128,221,242,358,360,369,372,470,482,525,526,527,529,559,570,572,582,584,],[236,236,236,236,236,236,481,236,530,236,236,236,558,573,236,581,236,236,]),'expression':([128,141,221,238,242,247,268,287,294,298,358,360,362,366,367,369,372,470,482,525,526,527,528,529,559,565,570,572,582,584,],[239,265,239,265,239,376,408,427,265,265,239,239,472,476,477,239,239,239,239,239,239,239,557,239,239,576,239,239,239,239,]),'assignment_expression':([128,135,141,182,195,221,238,242,247,257,268,287,288,294,298,309,310,358,360,362,365,366,367,369,372,378,393,401,402,455,457,470,482,503,510,525,526,527,528,529,532,544,545,559,565,570,572,582,584,],[248,255,248,308,255,248,248,248,248,308,248,248,430,248,248,441,442,248,248,248,475,248,248,248,248,485,255,495,496,308,308,248,248,539,308,248,248,248,248,248,255,568,569,248,248,248,248,248,248,]),'initializer':([135,195,393,532,],[254,327,488,560,]),'assignment_expression_opt':([182,257,455,457,510,],[305,399,517,519,542,]),'typeid_noparen_declarator':([192,],[316,]),'abstract_declarator_opt':([192,216,],[317,350,]),'direct_typeid_noparen_declarator':([192,318,],[319,446,]),'abstract_declarator':([192,216,322,351,],[321,321,450,450,]),'direct_abstract_declarator':([192,216,318,322,351,352,452,],[325,325,447,325,325,447,447,]),'struct_declarator_list_opt':([209,],[339,]),'struct_declarator_list':([209,],[344,]),'struct_declarator':([209,466,],[345,522,]),'pragmacomp_or_statement':([242,358,360,470,525,526,527,570,582,584,],[368,469,471,524,552,555,556,579,585,586,]),'pppragma_directive_list':([242,358,360,470,525,526,527,570,582,584,],[369,369,369,369,369,369,369,369,369,369,]),'assignment_operator':([250,],[378,]),'initializer_list_opt':([256,],[390,]),'initializer_list':([256,498,],[391,537,]),'designation_opt':([256,487,498,562,],[393,532,393,532,]),'designation':([256,487,498,562,],[394,394,394,394,]),'designator_list':([256,487,498,562,],[395,395,395,395,]),'designator':([256,395,487,498,562,],[396,490,396,396,396,]),'argument_expression_list':([288,],[428,]),'parameter_type_list_opt':([322,351,458,],[451,451,521,]),'offsetof_member_designator':([507,],[540,]),} + +_lr_goto = {} +for _k, _v in _lr_goto_items.items(): + for _x, _y in zip(_v[0], _v[1]): + if not _x in _lr_goto: _lr_goto[_x] = {} + _lr_goto[_x][_k] = _y +del _lr_goto_items +_lr_productions = [ + ("S' -> translation_unit_or_empty","S'",1,None,None,None), + 
('abstract_declarator_opt -> empty','abstract_declarator_opt',1,'p_abstract_declarator_opt','plyparser.py',43), + ('abstract_declarator_opt -> abstract_declarator','abstract_declarator_opt',1,'p_abstract_declarator_opt','plyparser.py',44), + ('assignment_expression_opt -> empty','assignment_expression_opt',1,'p_assignment_expression_opt','plyparser.py',43), + ('assignment_expression_opt -> assignment_expression','assignment_expression_opt',1,'p_assignment_expression_opt','plyparser.py',44), + ('block_item_list_opt -> empty','block_item_list_opt',1,'p_block_item_list_opt','plyparser.py',43), + ('block_item_list_opt -> block_item_list','block_item_list_opt',1,'p_block_item_list_opt','plyparser.py',44), + ('declaration_list_opt -> empty','declaration_list_opt',1,'p_declaration_list_opt','plyparser.py',43), + ('declaration_list_opt -> declaration_list','declaration_list_opt',1,'p_declaration_list_opt','plyparser.py',44), + ('declaration_specifiers_no_type_opt -> empty','declaration_specifiers_no_type_opt',1,'p_declaration_specifiers_no_type_opt','plyparser.py',43), + ('declaration_specifiers_no_type_opt -> declaration_specifiers_no_type','declaration_specifiers_no_type_opt',1,'p_declaration_specifiers_no_type_opt','plyparser.py',44), + ('designation_opt -> empty','designation_opt',1,'p_designation_opt','plyparser.py',43), + ('designation_opt -> designation','designation_opt',1,'p_designation_opt','plyparser.py',44), + ('expression_opt -> empty','expression_opt',1,'p_expression_opt','plyparser.py',43), + ('expression_opt -> expression','expression_opt',1,'p_expression_opt','plyparser.py',44), + ('id_init_declarator_list_opt -> empty','id_init_declarator_list_opt',1,'p_id_init_declarator_list_opt','plyparser.py',43), + ('id_init_declarator_list_opt -> id_init_declarator_list','id_init_declarator_list_opt',1,'p_id_init_declarator_list_opt','plyparser.py',44), + ('identifier_list_opt -> empty','identifier_list_opt',1,'p_identifier_list_opt','plyparser.py',43), + ('identifier_list_opt -> identifier_list','identifier_list_opt',1,'p_identifier_list_opt','plyparser.py',44), + ('init_declarator_list_opt -> empty','init_declarator_list_opt',1,'p_init_declarator_list_opt','plyparser.py',43), + ('init_declarator_list_opt -> init_declarator_list','init_declarator_list_opt',1,'p_init_declarator_list_opt','plyparser.py',44), + ('initializer_list_opt -> empty','initializer_list_opt',1,'p_initializer_list_opt','plyparser.py',43), + ('initializer_list_opt -> initializer_list','initializer_list_opt',1,'p_initializer_list_opt','plyparser.py',44), + ('parameter_type_list_opt -> empty','parameter_type_list_opt',1,'p_parameter_type_list_opt','plyparser.py',43), + ('parameter_type_list_opt -> parameter_type_list','parameter_type_list_opt',1,'p_parameter_type_list_opt','plyparser.py',44), + ('struct_declarator_list_opt -> empty','struct_declarator_list_opt',1,'p_struct_declarator_list_opt','plyparser.py',43), + ('struct_declarator_list_opt -> struct_declarator_list','struct_declarator_list_opt',1,'p_struct_declarator_list_opt','plyparser.py',44), + ('type_qualifier_list_opt -> empty','type_qualifier_list_opt',1,'p_type_qualifier_list_opt','plyparser.py',43), + ('type_qualifier_list_opt -> type_qualifier_list','type_qualifier_list_opt',1,'p_type_qualifier_list_opt','plyparser.py',44), + ('direct_id_declarator -> ID','direct_id_declarator',1,'p_direct_id_declarator_1','plyparser.py',126), + ('direct_id_declarator -> LPAREN id_declarator RPAREN','direct_id_declarator',3,'p_direct_id_declarator_2','plyparser.py',126), + 
('direct_id_declarator -> direct_id_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET','direct_id_declarator',5,'p_direct_id_declarator_3','plyparser.py',126), + ('direct_id_declarator -> direct_id_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET','direct_id_declarator',6,'p_direct_id_declarator_4','plyparser.py',126), + ('direct_id_declarator -> direct_id_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET','direct_id_declarator',6,'p_direct_id_declarator_4','plyparser.py',127), + ('direct_id_declarator -> direct_id_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET','direct_id_declarator',5,'p_direct_id_declarator_5','plyparser.py',126), + ('direct_id_declarator -> direct_id_declarator LPAREN parameter_type_list RPAREN','direct_id_declarator',4,'p_direct_id_declarator_6','plyparser.py',126), + ('direct_id_declarator -> direct_id_declarator LPAREN identifier_list_opt RPAREN','direct_id_declarator',4,'p_direct_id_declarator_6','plyparser.py',127), + ('direct_typeid_declarator -> TYPEID','direct_typeid_declarator',1,'p_direct_typeid_declarator_1','plyparser.py',126), + ('direct_typeid_declarator -> LPAREN typeid_declarator RPAREN','direct_typeid_declarator',3,'p_direct_typeid_declarator_2','plyparser.py',126), + ('direct_typeid_declarator -> direct_typeid_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET','direct_typeid_declarator',5,'p_direct_typeid_declarator_3','plyparser.py',126), + ('direct_typeid_declarator -> direct_typeid_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET','direct_typeid_declarator',6,'p_direct_typeid_declarator_4','plyparser.py',126), + ('direct_typeid_declarator -> direct_typeid_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET','direct_typeid_declarator',6,'p_direct_typeid_declarator_4','plyparser.py',127), + ('direct_typeid_declarator -> direct_typeid_declarator LBRACKET type_qualifier_list_opt TIMES RBRACKET','direct_typeid_declarator',5,'p_direct_typeid_declarator_5','plyparser.py',126), + ('direct_typeid_declarator -> direct_typeid_declarator LPAREN parameter_type_list RPAREN','direct_typeid_declarator',4,'p_direct_typeid_declarator_6','plyparser.py',126), + ('direct_typeid_declarator -> direct_typeid_declarator LPAREN identifier_list_opt RPAREN','direct_typeid_declarator',4,'p_direct_typeid_declarator_6','plyparser.py',127), + ('direct_typeid_noparen_declarator -> TYPEID','direct_typeid_noparen_declarator',1,'p_direct_typeid_noparen_declarator_1','plyparser.py',126), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET','direct_typeid_noparen_declarator',5,'p_direct_typeid_noparen_declarator_3','plyparser.py',126), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LBRACKET STATIC type_qualifier_list_opt assignment_expression RBRACKET','direct_typeid_noparen_declarator',6,'p_direct_typeid_noparen_declarator_4','plyparser.py',126), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LBRACKET type_qualifier_list STATIC assignment_expression RBRACKET','direct_typeid_noparen_declarator',6,'p_direct_typeid_noparen_declarator_4','plyparser.py',127), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LBRACKET type_qualifier_list_opt TIMES 
RBRACKET','direct_typeid_noparen_declarator',5,'p_direct_typeid_noparen_declarator_5','plyparser.py',126), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LPAREN parameter_type_list RPAREN','direct_typeid_noparen_declarator',4,'p_direct_typeid_noparen_declarator_6','plyparser.py',126), + ('direct_typeid_noparen_declarator -> direct_typeid_noparen_declarator LPAREN identifier_list_opt RPAREN','direct_typeid_noparen_declarator',4,'p_direct_typeid_noparen_declarator_6','plyparser.py',127), + ('id_declarator -> direct_id_declarator','id_declarator',1,'p_id_declarator_1','plyparser.py',126), + ('id_declarator -> pointer direct_id_declarator','id_declarator',2,'p_id_declarator_2','plyparser.py',126), + ('typeid_declarator -> direct_typeid_declarator','typeid_declarator',1,'p_typeid_declarator_1','plyparser.py',126), + ('typeid_declarator -> pointer direct_typeid_declarator','typeid_declarator',2,'p_typeid_declarator_2','plyparser.py',126), + ('typeid_noparen_declarator -> direct_typeid_noparen_declarator','typeid_noparen_declarator',1,'p_typeid_noparen_declarator_1','plyparser.py',126), + ('typeid_noparen_declarator -> pointer direct_typeid_noparen_declarator','typeid_noparen_declarator',2,'p_typeid_noparen_declarator_2','plyparser.py',126), + ('translation_unit_or_empty -> translation_unit','translation_unit_or_empty',1,'p_translation_unit_or_empty','c_parser.py',509), + ('translation_unit_or_empty -> empty','translation_unit_or_empty',1,'p_translation_unit_or_empty','c_parser.py',510), + ('translation_unit -> external_declaration','translation_unit',1,'p_translation_unit_1','c_parser.py',518), + ('translation_unit -> translation_unit external_declaration','translation_unit',2,'p_translation_unit_2','c_parser.py',524), + ('external_declaration -> function_definition','external_declaration',1,'p_external_declaration_1','c_parser.py',534), + ('external_declaration -> declaration','external_declaration',1,'p_external_declaration_2','c_parser.py',539), + ('external_declaration -> pp_directive','external_declaration',1,'p_external_declaration_3','c_parser.py',544), + ('external_declaration -> pppragma_directive','external_declaration',1,'p_external_declaration_3','c_parser.py',545), + ('external_declaration -> SEMI','external_declaration',1,'p_external_declaration_4','c_parser.py',550), + ('external_declaration -> static_assert','external_declaration',1,'p_external_declaration_5','c_parser.py',555), + ('static_assert -> _STATIC_ASSERT LPAREN constant_expression COMMA unified_string_literal RPAREN','static_assert',6,'p_static_assert_declaration','c_parser.py',560), + ('static_assert -> _STATIC_ASSERT LPAREN constant_expression RPAREN','static_assert',4,'p_static_assert_declaration','c_parser.py',561), + ('pp_directive -> PPHASH','pp_directive',1,'p_pp_directive','c_parser.py',569), + ('pppragma_directive -> PPPRAGMA','pppragma_directive',1,'p_pppragma_directive','c_parser.py',580), + ('pppragma_directive -> PPPRAGMA PPPRAGMASTR','pppragma_directive',2,'p_pppragma_directive','c_parser.py',581), + ('pppragma_directive -> _PRAGMA LPAREN unified_string_literal RPAREN','pppragma_directive',4,'p_pppragma_directive','c_parser.py',582), + ('pppragma_directive_list -> pppragma_directive','pppragma_directive_list',1,'p_pppragma_directive_list','c_parser.py',592), + ('pppragma_directive_list -> pppragma_directive_list pppragma_directive','pppragma_directive_list',2,'p_pppragma_directive_list','c_parser.py',593), + ('function_definition -> id_declarator declaration_list_opt 
compound_statement','function_definition',3,'p_function_definition_1','c_parser.py',600), + ('function_definition -> declaration_specifiers id_declarator declaration_list_opt compound_statement','function_definition',4,'p_function_definition_2','c_parser.py',618), + ('statement -> labeled_statement','statement',1,'p_statement','c_parser.py',633), + ('statement -> expression_statement','statement',1,'p_statement','c_parser.py',634), + ('statement -> compound_statement','statement',1,'p_statement','c_parser.py',635), + ('statement -> selection_statement','statement',1,'p_statement','c_parser.py',636), + ('statement -> iteration_statement','statement',1,'p_statement','c_parser.py',637), + ('statement -> jump_statement','statement',1,'p_statement','c_parser.py',638), + ('statement -> pppragma_directive','statement',1,'p_statement','c_parser.py',639), + ('statement -> static_assert','statement',1,'p_statement','c_parser.py',640), + ('pragmacomp_or_statement -> pppragma_directive_list statement','pragmacomp_or_statement',2,'p_pragmacomp_or_statement','c_parser.py',688), + ('pragmacomp_or_statement -> statement','pragmacomp_or_statement',1,'p_pragmacomp_or_statement','c_parser.py',689), + ('decl_body -> declaration_specifiers init_declarator_list_opt','decl_body',2,'p_decl_body','c_parser.py',708), + ('decl_body -> declaration_specifiers_no_type id_init_declarator_list_opt','decl_body',2,'p_decl_body','c_parser.py',709), + ('declaration -> decl_body SEMI','declaration',2,'p_declaration','c_parser.py',769), + ('declaration_list -> declaration','declaration_list',1,'p_declaration_list','c_parser.py',778), + ('declaration_list -> declaration_list declaration','declaration_list',2,'p_declaration_list','c_parser.py',779), + ('declaration_specifiers_no_type -> type_qualifier declaration_specifiers_no_type_opt','declaration_specifiers_no_type',2,'p_declaration_specifiers_no_type_1','c_parser.py',789), + ('declaration_specifiers_no_type -> storage_class_specifier declaration_specifiers_no_type_opt','declaration_specifiers_no_type',2,'p_declaration_specifiers_no_type_2','c_parser.py',794), + ('declaration_specifiers_no_type -> function_specifier declaration_specifiers_no_type_opt','declaration_specifiers_no_type',2,'p_declaration_specifiers_no_type_3','c_parser.py',799), + ('declaration_specifiers_no_type -> atomic_specifier declaration_specifiers_no_type_opt','declaration_specifiers_no_type',2,'p_declaration_specifiers_no_type_4','c_parser.py',806), + ('declaration_specifiers_no_type -> alignment_specifier declaration_specifiers_no_type_opt','declaration_specifiers_no_type',2,'p_declaration_specifiers_no_type_5','c_parser.py',811), + ('declaration_specifiers -> declaration_specifiers type_qualifier','declaration_specifiers',2,'p_declaration_specifiers_1','c_parser.py',816), + ('declaration_specifiers -> declaration_specifiers storage_class_specifier','declaration_specifiers',2,'p_declaration_specifiers_2','c_parser.py',821), + ('declaration_specifiers -> declaration_specifiers function_specifier','declaration_specifiers',2,'p_declaration_specifiers_3','c_parser.py',826), + ('declaration_specifiers -> declaration_specifiers type_specifier_no_typeid','declaration_specifiers',2,'p_declaration_specifiers_4','c_parser.py',831), + ('declaration_specifiers -> type_specifier','declaration_specifiers',1,'p_declaration_specifiers_5','c_parser.py',836), + ('declaration_specifiers -> declaration_specifiers_no_type type_specifier','declaration_specifiers',2,'p_declaration_specifiers_6','c_parser.py',841), + 
('declaration_specifiers -> declaration_specifiers alignment_specifier','declaration_specifiers',2,'p_declaration_specifiers_7','c_parser.py',846), + ('storage_class_specifier -> AUTO','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',851), + ('storage_class_specifier -> REGISTER','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',852), + ('storage_class_specifier -> STATIC','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',853), + ('storage_class_specifier -> EXTERN','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',854), + ('storage_class_specifier -> TYPEDEF','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',855), + ('storage_class_specifier -> _THREAD_LOCAL','storage_class_specifier',1,'p_storage_class_specifier','c_parser.py',856), + ('function_specifier -> INLINE','function_specifier',1,'p_function_specifier','c_parser.py',861), + ('function_specifier -> _NORETURN','function_specifier',1,'p_function_specifier','c_parser.py',862), + ('type_specifier_no_typeid -> VOID','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',867), + ('type_specifier_no_typeid -> _BOOL','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',868), + ('type_specifier_no_typeid -> CHAR','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',869), + ('type_specifier_no_typeid -> SHORT','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',870), + ('type_specifier_no_typeid -> INT','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',871), + ('type_specifier_no_typeid -> LONG','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',872), + ('type_specifier_no_typeid -> FLOAT','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',873), + ('type_specifier_no_typeid -> DOUBLE','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',874), + ('type_specifier_no_typeid -> _COMPLEX','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',875), + ('type_specifier_no_typeid -> SIGNED','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',876), + ('type_specifier_no_typeid -> UNSIGNED','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',877), + ('type_specifier_no_typeid -> __INT128','type_specifier_no_typeid',1,'p_type_specifier_no_typeid','c_parser.py',878), + ('type_specifier -> typedef_name','type_specifier',1,'p_type_specifier','c_parser.py',883), + ('type_specifier -> enum_specifier','type_specifier',1,'p_type_specifier','c_parser.py',884), + ('type_specifier -> struct_or_union_specifier','type_specifier',1,'p_type_specifier','c_parser.py',885), + ('type_specifier -> type_specifier_no_typeid','type_specifier',1,'p_type_specifier','c_parser.py',886), + ('type_specifier -> atomic_specifier','type_specifier',1,'p_type_specifier','c_parser.py',887), + ('atomic_specifier -> _ATOMIC LPAREN type_name RPAREN','atomic_specifier',4,'p_atomic_specifier','c_parser.py',893), + ('type_qualifier -> CONST','type_qualifier',1,'p_type_qualifier','c_parser.py',900), + ('type_qualifier -> RESTRICT','type_qualifier',1,'p_type_qualifier','c_parser.py',901), + ('type_qualifier -> VOLATILE','type_qualifier',1,'p_type_qualifier','c_parser.py',902), + ('type_qualifier -> _ATOMIC','type_qualifier',1,'p_type_qualifier','c_parser.py',903), + ('init_declarator_list -> init_declarator','init_declarator_list',1,'p_init_declarator_list','c_parser.py',908), + 
('init_declarator_list -> init_declarator_list COMMA init_declarator','init_declarator_list',3,'p_init_declarator_list','c_parser.py',909), + ('init_declarator -> declarator','init_declarator',1,'p_init_declarator','c_parser.py',917), + ('init_declarator -> declarator EQUALS initializer','init_declarator',3,'p_init_declarator','c_parser.py',918), + ('id_init_declarator_list -> id_init_declarator','id_init_declarator_list',1,'p_id_init_declarator_list','c_parser.py',923), + ('id_init_declarator_list -> id_init_declarator_list COMMA init_declarator','id_init_declarator_list',3,'p_id_init_declarator_list','c_parser.py',924), + ('id_init_declarator -> id_declarator','id_init_declarator',1,'p_id_init_declarator','c_parser.py',929), + ('id_init_declarator -> id_declarator EQUALS initializer','id_init_declarator',3,'p_id_init_declarator','c_parser.py',930), + ('specifier_qualifier_list -> specifier_qualifier_list type_specifier_no_typeid','specifier_qualifier_list',2,'p_specifier_qualifier_list_1','c_parser.py',937), + ('specifier_qualifier_list -> specifier_qualifier_list type_qualifier','specifier_qualifier_list',2,'p_specifier_qualifier_list_2','c_parser.py',942), + ('specifier_qualifier_list -> type_specifier','specifier_qualifier_list',1,'p_specifier_qualifier_list_3','c_parser.py',947), + ('specifier_qualifier_list -> type_qualifier_list type_specifier','specifier_qualifier_list',2,'p_specifier_qualifier_list_4','c_parser.py',952), + ('specifier_qualifier_list -> alignment_specifier','specifier_qualifier_list',1,'p_specifier_qualifier_list_5','c_parser.py',957), + ('specifier_qualifier_list -> specifier_qualifier_list alignment_specifier','specifier_qualifier_list',2,'p_specifier_qualifier_list_6','c_parser.py',962), + ('struct_or_union_specifier -> struct_or_union ID','struct_or_union_specifier',2,'p_struct_or_union_specifier_1','c_parser.py',970), + ('struct_or_union_specifier -> struct_or_union TYPEID','struct_or_union_specifier',2,'p_struct_or_union_specifier_1','c_parser.py',971), + ('struct_or_union_specifier -> struct_or_union brace_open struct_declaration_list brace_close','struct_or_union_specifier',4,'p_struct_or_union_specifier_2','c_parser.py',981), + ('struct_or_union_specifier -> struct_or_union brace_open brace_close','struct_or_union_specifier',3,'p_struct_or_union_specifier_2','c_parser.py',982), + ('struct_or_union_specifier -> struct_or_union ID brace_open struct_declaration_list brace_close','struct_or_union_specifier',5,'p_struct_or_union_specifier_3','c_parser.py',999), + ('struct_or_union_specifier -> struct_or_union ID brace_open brace_close','struct_or_union_specifier',4,'p_struct_or_union_specifier_3','c_parser.py',1000), + ('struct_or_union_specifier -> struct_or_union TYPEID brace_open struct_declaration_list brace_close','struct_or_union_specifier',5,'p_struct_or_union_specifier_3','c_parser.py',1001), + ('struct_or_union_specifier -> struct_or_union TYPEID brace_open brace_close','struct_or_union_specifier',4,'p_struct_or_union_specifier_3','c_parser.py',1002), + ('struct_or_union -> STRUCT','struct_or_union',1,'p_struct_or_union','c_parser.py',1018), + ('struct_or_union -> UNION','struct_or_union',1,'p_struct_or_union','c_parser.py',1019), + ('struct_declaration_list -> struct_declaration','struct_declaration_list',1,'p_struct_declaration_list','c_parser.py',1026), + ('struct_declaration_list -> struct_declaration_list struct_declaration','struct_declaration_list',2,'p_struct_declaration_list','c_parser.py',1027), + ('struct_declaration -> 
specifier_qualifier_list struct_declarator_list_opt SEMI','struct_declaration',3,'p_struct_declaration_1','c_parser.py',1035), + ('struct_declaration -> SEMI','struct_declaration',1,'p_struct_declaration_2','c_parser.py',1073), + ('struct_declaration -> pppragma_directive','struct_declaration',1,'p_struct_declaration_3','c_parser.py',1078), + ('struct_declarator_list -> struct_declarator','struct_declarator_list',1,'p_struct_declarator_list','c_parser.py',1083), + ('struct_declarator_list -> struct_declarator_list COMMA struct_declarator','struct_declarator_list',3,'p_struct_declarator_list','c_parser.py',1084), + ('struct_declarator -> declarator','struct_declarator',1,'p_struct_declarator_1','c_parser.py',1092), + ('struct_declarator -> declarator COLON constant_expression','struct_declarator',3,'p_struct_declarator_2','c_parser.py',1097), + ('struct_declarator -> COLON constant_expression','struct_declarator',2,'p_struct_declarator_2','c_parser.py',1098), + ('enum_specifier -> ENUM ID','enum_specifier',2,'p_enum_specifier_1','c_parser.py',1106), + ('enum_specifier -> ENUM TYPEID','enum_specifier',2,'p_enum_specifier_1','c_parser.py',1107), + ('enum_specifier -> ENUM brace_open enumerator_list brace_close','enum_specifier',4,'p_enum_specifier_2','c_parser.py',1112), + ('enum_specifier -> ENUM ID brace_open enumerator_list brace_close','enum_specifier',5,'p_enum_specifier_3','c_parser.py',1117), + ('enum_specifier -> ENUM TYPEID brace_open enumerator_list brace_close','enum_specifier',5,'p_enum_specifier_3','c_parser.py',1118), + ('enumerator_list -> enumerator','enumerator_list',1,'p_enumerator_list','c_parser.py',1123), + ('enumerator_list -> enumerator_list COMMA','enumerator_list',2,'p_enumerator_list','c_parser.py',1124), + ('enumerator_list -> enumerator_list COMMA enumerator','enumerator_list',3,'p_enumerator_list','c_parser.py',1125), + ('alignment_specifier -> _ALIGNAS LPAREN type_name RPAREN','alignment_specifier',4,'p_alignment_specifier','c_parser.py',1136), + ('alignment_specifier -> _ALIGNAS LPAREN constant_expression RPAREN','alignment_specifier',4,'p_alignment_specifier','c_parser.py',1137), + ('enumerator -> ID','enumerator',1,'p_enumerator','c_parser.py',1142), + ('enumerator -> ID EQUALS constant_expression','enumerator',3,'p_enumerator','c_parser.py',1143), + ('declarator -> id_declarator','declarator',1,'p_declarator','c_parser.py',1158), + ('declarator -> typeid_declarator','declarator',1,'p_declarator','c_parser.py',1159), + ('pointer -> TIMES type_qualifier_list_opt','pointer',2,'p_pointer','c_parser.py',1271), + ('pointer -> TIMES type_qualifier_list_opt pointer','pointer',3,'p_pointer','c_parser.py',1272), + ('type_qualifier_list -> type_qualifier','type_qualifier_list',1,'p_type_qualifier_list','c_parser.py',1301), + ('type_qualifier_list -> type_qualifier_list type_qualifier','type_qualifier_list',2,'p_type_qualifier_list','c_parser.py',1302), + ('parameter_type_list -> parameter_list','parameter_type_list',1,'p_parameter_type_list','c_parser.py',1307), + ('parameter_type_list -> parameter_list COMMA ELLIPSIS','parameter_type_list',3,'p_parameter_type_list','c_parser.py',1308), + ('parameter_list -> parameter_declaration','parameter_list',1,'p_parameter_list','c_parser.py',1316), + ('parameter_list -> parameter_list COMMA parameter_declaration','parameter_list',3,'p_parameter_list','c_parser.py',1317), + ('parameter_declaration -> declaration_specifiers id_declarator','parameter_declaration',2,'p_parameter_declaration_1','c_parser.py',1336), + 
('parameter_declaration -> declaration_specifiers typeid_noparen_declarator','parameter_declaration',2,'p_parameter_declaration_1','c_parser.py',1337), + ('parameter_declaration -> declaration_specifiers abstract_declarator_opt','parameter_declaration',2,'p_parameter_declaration_2','c_parser.py',1348), + ('identifier_list -> identifier','identifier_list',1,'p_identifier_list','c_parser.py',1380), + ('identifier_list -> identifier_list COMMA identifier','identifier_list',3,'p_identifier_list','c_parser.py',1381), + ('initializer -> assignment_expression','initializer',1,'p_initializer_1','c_parser.py',1390), + ('initializer -> brace_open initializer_list_opt brace_close','initializer',3,'p_initializer_2','c_parser.py',1395), + ('initializer -> brace_open initializer_list COMMA brace_close','initializer',4,'p_initializer_2','c_parser.py',1396), + ('initializer_list -> designation_opt initializer','initializer_list',2,'p_initializer_list','c_parser.py',1404), + ('initializer_list -> initializer_list COMMA designation_opt initializer','initializer_list',4,'p_initializer_list','c_parser.py',1405), + ('designation -> designator_list EQUALS','designation',2,'p_designation','c_parser.py',1416), + ('designator_list -> designator','designator_list',1,'p_designator_list','c_parser.py',1424), + ('designator_list -> designator_list designator','designator_list',2,'p_designator_list','c_parser.py',1425), + ('designator -> LBRACKET constant_expression RBRACKET','designator',3,'p_designator','c_parser.py',1430), + ('designator -> PERIOD identifier','designator',2,'p_designator','c_parser.py',1431), + ('type_name -> specifier_qualifier_list abstract_declarator_opt','type_name',2,'p_type_name','c_parser.py',1436), + ('abstract_declarator -> pointer','abstract_declarator',1,'p_abstract_declarator_1','c_parser.py',1448), + ('abstract_declarator -> pointer direct_abstract_declarator','abstract_declarator',2,'p_abstract_declarator_2','c_parser.py',1456), + ('abstract_declarator -> direct_abstract_declarator','abstract_declarator',1,'p_abstract_declarator_3','c_parser.py',1461), + ('direct_abstract_declarator -> LPAREN abstract_declarator RPAREN','direct_abstract_declarator',3,'p_direct_abstract_declarator_1','c_parser.py',1471), + ('direct_abstract_declarator -> direct_abstract_declarator LBRACKET assignment_expression_opt RBRACKET','direct_abstract_declarator',4,'p_direct_abstract_declarator_2','c_parser.py',1475), + ('direct_abstract_declarator -> LBRACKET type_qualifier_list_opt assignment_expression_opt RBRACKET','direct_abstract_declarator',4,'p_direct_abstract_declarator_3','c_parser.py',1486), + ('direct_abstract_declarator -> direct_abstract_declarator LBRACKET TIMES RBRACKET','direct_abstract_declarator',4,'p_direct_abstract_declarator_4','c_parser.py',1496), + ('direct_abstract_declarator -> LBRACKET TIMES RBRACKET','direct_abstract_declarator',3,'p_direct_abstract_declarator_5','c_parser.py',1507), + ('direct_abstract_declarator -> direct_abstract_declarator LPAREN parameter_type_list_opt RPAREN','direct_abstract_declarator',4,'p_direct_abstract_declarator_6','c_parser.py',1516), + ('direct_abstract_declarator -> LPAREN parameter_type_list_opt RPAREN','direct_abstract_declarator',3,'p_direct_abstract_declarator_7','c_parser.py',1526), + ('block_item -> declaration','block_item',1,'p_block_item','c_parser.py',1537), + ('block_item -> statement','block_item',1,'p_block_item','c_parser.py',1538), + ('block_item_list -> block_item','block_item_list',1,'p_block_item_list','c_parser.py',1545), + 
('block_item_list -> block_item_list block_item','block_item_list',2,'p_block_item_list','c_parser.py',1546), + ('compound_statement -> brace_open block_item_list_opt brace_close','compound_statement',3,'p_compound_statement_1','c_parser.py',1552), + ('labeled_statement -> ID COLON pragmacomp_or_statement','labeled_statement',3,'p_labeled_statement_1','c_parser.py',1558), + ('labeled_statement -> CASE constant_expression COLON pragmacomp_or_statement','labeled_statement',4,'p_labeled_statement_2','c_parser.py',1562), + ('labeled_statement -> DEFAULT COLON pragmacomp_or_statement','labeled_statement',3,'p_labeled_statement_3','c_parser.py',1566), + ('selection_statement -> IF LPAREN expression RPAREN pragmacomp_or_statement','selection_statement',5,'p_selection_statement_1','c_parser.py',1570), + ('selection_statement -> IF LPAREN expression RPAREN statement ELSE pragmacomp_or_statement','selection_statement',7,'p_selection_statement_2','c_parser.py',1574), + ('selection_statement -> SWITCH LPAREN expression RPAREN pragmacomp_or_statement','selection_statement',5,'p_selection_statement_3','c_parser.py',1578), + ('iteration_statement -> WHILE LPAREN expression RPAREN pragmacomp_or_statement','iteration_statement',5,'p_iteration_statement_1','c_parser.py',1583), + ('iteration_statement -> DO pragmacomp_or_statement WHILE LPAREN expression RPAREN SEMI','iteration_statement',7,'p_iteration_statement_2','c_parser.py',1587), + ('iteration_statement -> FOR LPAREN expression_opt SEMI expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement','iteration_statement',9,'p_iteration_statement_3','c_parser.py',1591), + ('iteration_statement -> FOR LPAREN declaration expression_opt SEMI expression_opt RPAREN pragmacomp_or_statement','iteration_statement',8,'p_iteration_statement_4','c_parser.py',1595), + ('jump_statement -> GOTO ID SEMI','jump_statement',3,'p_jump_statement_1','c_parser.py',1600), + ('jump_statement -> BREAK SEMI','jump_statement',2,'p_jump_statement_2','c_parser.py',1604), + ('jump_statement -> CONTINUE SEMI','jump_statement',2,'p_jump_statement_3','c_parser.py',1608), + ('jump_statement -> RETURN expression SEMI','jump_statement',3,'p_jump_statement_4','c_parser.py',1612), + ('jump_statement -> RETURN SEMI','jump_statement',2,'p_jump_statement_4','c_parser.py',1613), + ('expression_statement -> expression_opt SEMI','expression_statement',2,'p_expression_statement','c_parser.py',1618), + ('expression -> assignment_expression','expression',1,'p_expression','c_parser.py',1625), + ('expression -> expression COMMA assignment_expression','expression',3,'p_expression','c_parser.py',1626), + ('assignment_expression -> LPAREN compound_statement RPAREN','assignment_expression',3,'p_parenthesized_compound_expression','c_parser.py',1638), + ('typedef_name -> TYPEID','typedef_name',1,'p_typedef_name','c_parser.py',1642), + ('assignment_expression -> conditional_expression','assignment_expression',1,'p_assignment_expression','c_parser.py',1646), + ('assignment_expression -> unary_expression assignment_operator assignment_expression','assignment_expression',3,'p_assignment_expression','c_parser.py',1647), + ('assignment_operator -> EQUALS','assignment_operator',1,'p_assignment_operator','c_parser.py',1660), + ('assignment_operator -> XOREQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1661), + ('assignment_operator -> TIMESEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1662), + ('assignment_operator -> 
DIVEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1663), + ('assignment_operator -> MODEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1664), + ('assignment_operator -> PLUSEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1665), + ('assignment_operator -> MINUSEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1666), + ('assignment_operator -> LSHIFTEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1667), + ('assignment_operator -> RSHIFTEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1668), + ('assignment_operator -> ANDEQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1669), + ('assignment_operator -> OREQUAL','assignment_operator',1,'p_assignment_operator','c_parser.py',1670), + ('constant_expression -> conditional_expression','constant_expression',1,'p_constant_expression','c_parser.py',1675), + ('conditional_expression -> binary_expression','conditional_expression',1,'p_conditional_expression','c_parser.py',1679), + ('conditional_expression -> binary_expression CONDOP expression COLON conditional_expression','conditional_expression',5,'p_conditional_expression','c_parser.py',1680), + ('binary_expression -> cast_expression','binary_expression',1,'p_binary_expression','c_parser.py',1688), + ('binary_expression -> binary_expression TIMES binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1689), + ('binary_expression -> binary_expression DIVIDE binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1690), + ('binary_expression -> binary_expression MOD binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1691), + ('binary_expression -> binary_expression PLUS binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1692), + ('binary_expression -> binary_expression MINUS binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1693), + ('binary_expression -> binary_expression RSHIFT binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1694), + ('binary_expression -> binary_expression LSHIFT binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1695), + ('binary_expression -> binary_expression LT binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1696), + ('binary_expression -> binary_expression LE binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1697), + ('binary_expression -> binary_expression GE binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1698), + ('binary_expression -> binary_expression GT binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1699), + ('binary_expression -> binary_expression EQ binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1700), + ('binary_expression -> binary_expression NE binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1701), + ('binary_expression -> binary_expression AND binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1702), + ('binary_expression -> binary_expression OR binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1703), + ('binary_expression -> binary_expression XOR binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1704), + ('binary_expression -> binary_expression LAND 
binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1705), + ('binary_expression -> binary_expression LOR binary_expression','binary_expression',3,'p_binary_expression','c_parser.py',1706), + ('cast_expression -> unary_expression','cast_expression',1,'p_cast_expression_1','c_parser.py',1714), + ('cast_expression -> LPAREN type_name RPAREN cast_expression','cast_expression',4,'p_cast_expression_2','c_parser.py',1718), + ('unary_expression -> postfix_expression','unary_expression',1,'p_unary_expression_1','c_parser.py',1722), + ('unary_expression -> PLUSPLUS unary_expression','unary_expression',2,'p_unary_expression_2','c_parser.py',1726), + ('unary_expression -> MINUSMINUS unary_expression','unary_expression',2,'p_unary_expression_2','c_parser.py',1727), + ('unary_expression -> unary_operator cast_expression','unary_expression',2,'p_unary_expression_2','c_parser.py',1728), + ('unary_expression -> SIZEOF unary_expression','unary_expression',2,'p_unary_expression_3','c_parser.py',1733), + ('unary_expression -> SIZEOF LPAREN type_name RPAREN','unary_expression',4,'p_unary_expression_3','c_parser.py',1734), + ('unary_expression -> _ALIGNOF LPAREN type_name RPAREN','unary_expression',4,'p_unary_expression_3','c_parser.py',1735), + ('unary_operator -> AND','unary_operator',1,'p_unary_operator','c_parser.py',1743), + ('unary_operator -> TIMES','unary_operator',1,'p_unary_operator','c_parser.py',1744), + ('unary_operator -> PLUS','unary_operator',1,'p_unary_operator','c_parser.py',1745), + ('unary_operator -> MINUS','unary_operator',1,'p_unary_operator','c_parser.py',1746), + ('unary_operator -> NOT','unary_operator',1,'p_unary_operator','c_parser.py',1747), + ('unary_operator -> LNOT','unary_operator',1,'p_unary_operator','c_parser.py',1748), + ('postfix_expression -> primary_expression','postfix_expression',1,'p_postfix_expression_1','c_parser.py',1753), + ('postfix_expression -> postfix_expression LBRACKET expression RBRACKET','postfix_expression',4,'p_postfix_expression_2','c_parser.py',1757), + ('postfix_expression -> postfix_expression LPAREN argument_expression_list RPAREN','postfix_expression',4,'p_postfix_expression_3','c_parser.py',1761), + ('postfix_expression -> postfix_expression LPAREN RPAREN','postfix_expression',3,'p_postfix_expression_3','c_parser.py',1762), + ('postfix_expression -> postfix_expression PERIOD ID','postfix_expression',3,'p_postfix_expression_4','c_parser.py',1767), + ('postfix_expression -> postfix_expression PERIOD TYPEID','postfix_expression',3,'p_postfix_expression_4','c_parser.py',1768), + ('postfix_expression -> postfix_expression ARROW ID','postfix_expression',3,'p_postfix_expression_4','c_parser.py',1769), + ('postfix_expression -> postfix_expression ARROW TYPEID','postfix_expression',3,'p_postfix_expression_4','c_parser.py',1770), + ('postfix_expression -> postfix_expression PLUSPLUS','postfix_expression',2,'p_postfix_expression_5','c_parser.py',1776), + ('postfix_expression -> postfix_expression MINUSMINUS','postfix_expression',2,'p_postfix_expression_5','c_parser.py',1777), + ('postfix_expression -> LPAREN type_name RPAREN brace_open initializer_list brace_close','postfix_expression',6,'p_postfix_expression_6','c_parser.py',1782), + ('postfix_expression -> LPAREN type_name RPAREN brace_open initializer_list COMMA brace_close','postfix_expression',7,'p_postfix_expression_6','c_parser.py',1783), + ('primary_expression -> identifier','primary_expression',1,'p_primary_expression_1','c_parser.py',1788), + ('primary_expression -> 
constant','primary_expression',1,'p_primary_expression_2','c_parser.py',1792), + ('primary_expression -> unified_string_literal','primary_expression',1,'p_primary_expression_3','c_parser.py',1796), + ('primary_expression -> unified_wstring_literal','primary_expression',1,'p_primary_expression_3','c_parser.py',1797), + ('primary_expression -> LPAREN expression RPAREN','primary_expression',3,'p_primary_expression_4','c_parser.py',1802), + ('primary_expression -> OFFSETOF LPAREN type_name COMMA offsetof_member_designator RPAREN','primary_expression',6,'p_primary_expression_5','c_parser.py',1806), + ('offsetof_member_designator -> identifier','offsetof_member_designator',1,'p_offsetof_member_designator','c_parser.py',1814), + ('offsetof_member_designator -> offsetof_member_designator PERIOD identifier','offsetof_member_designator',3,'p_offsetof_member_designator','c_parser.py',1815), + ('offsetof_member_designator -> offsetof_member_designator LBRACKET expression RBRACKET','offsetof_member_designator',4,'p_offsetof_member_designator','c_parser.py',1816), + ('argument_expression_list -> assignment_expression','argument_expression_list',1,'p_argument_expression_list','c_parser.py',1828), + ('argument_expression_list -> argument_expression_list COMMA assignment_expression','argument_expression_list',3,'p_argument_expression_list','c_parser.py',1829), + ('identifier -> ID','identifier',1,'p_identifier','c_parser.py',1838), + ('constant -> INT_CONST_DEC','constant',1,'p_constant_1','c_parser.py',1842), + ('constant -> INT_CONST_OCT','constant',1,'p_constant_1','c_parser.py',1843), + ('constant -> INT_CONST_HEX','constant',1,'p_constant_1','c_parser.py',1844), + ('constant -> INT_CONST_BIN','constant',1,'p_constant_1','c_parser.py',1845), + ('constant -> INT_CONST_CHAR','constant',1,'p_constant_1','c_parser.py',1846), + ('constant -> FLOAT_CONST','constant',1,'p_constant_2','c_parser.py',1865), + ('constant -> HEX_FLOAT_CONST','constant',1,'p_constant_2','c_parser.py',1866), + ('constant -> CHAR_CONST','constant',1,'p_constant_3','c_parser.py',1882), + ('constant -> WCHAR_CONST','constant',1,'p_constant_3','c_parser.py',1883), + ('constant -> U8CHAR_CONST','constant',1,'p_constant_3','c_parser.py',1884), + ('constant -> U16CHAR_CONST','constant',1,'p_constant_3','c_parser.py',1885), + ('constant -> U32CHAR_CONST','constant',1,'p_constant_3','c_parser.py',1886), + ('unified_string_literal -> STRING_LITERAL','unified_string_literal',1,'p_unified_string_literal','c_parser.py',1897), + ('unified_string_literal -> unified_string_literal STRING_LITERAL','unified_string_literal',2,'p_unified_string_literal','c_parser.py',1898), + ('unified_wstring_literal -> WSTRING_LITERAL','unified_wstring_literal',1,'p_unified_wstring_literal','c_parser.py',1908), + ('unified_wstring_literal -> U8STRING_LITERAL','unified_wstring_literal',1,'p_unified_wstring_literal','c_parser.py',1909), + ('unified_wstring_literal -> U16STRING_LITERAL','unified_wstring_literal',1,'p_unified_wstring_literal','c_parser.py',1910), + ('unified_wstring_literal -> U32STRING_LITERAL','unified_wstring_literal',1,'p_unified_wstring_literal','c_parser.py',1911), + ('unified_wstring_literal -> unified_wstring_literal WSTRING_LITERAL','unified_wstring_literal',2,'p_unified_wstring_literal','c_parser.py',1912), + ('unified_wstring_literal -> unified_wstring_literal U8STRING_LITERAL','unified_wstring_literal',2,'p_unified_wstring_literal','c_parser.py',1913), + ('unified_wstring_literal -> unified_wstring_literal 
U16STRING_LITERAL','unified_wstring_literal',2,'p_unified_wstring_literal','c_parser.py',1914), + ('unified_wstring_literal -> unified_wstring_literal U32STRING_LITERAL','unified_wstring_literal',2,'p_unified_wstring_literal','c_parser.py',1915), + ('brace_open -> LBRACE','brace_open',1,'p_brace_open','c_parser.py',1925), + ('brace_close -> RBRACE','brace_close',1,'p_brace_close','c_parser.py',1931), + ('empty -> ','empty',0,'p_empty','c_parser.py',1937), +] diff --git a/templates/skills/file_manager/main.py b/templates/skills/file_manager/main.py index 653c94ce..d5be988f 100644 --- a/templates/skills/file_manager/main.py +++ b/templates/skills/file_manager/main.py @@ -6,10 +6,11 @@ from skills.skill_base import Skill from services.file import get_writable_dir from showinfm import show_in_file_manager +from pdfminer.high_level import extract_text if TYPE_CHECKING: from wingmen.open_ai_wingman import OpenAiWingman -DEFAULT_MAX_TEXT_SIZE = 15000 +DEFAULT_MAX_TEXT_SIZE = 24000 SUPPORTED_FILE_EXTENSIONS = [ "adoc", "asc", @@ -46,6 +47,7 @@ "m3u", "map", "md", + "pdf", "pyd", "plist", "pl", @@ -63,11 +65,13 @@ "sql", "svg", "ts", + "tscn", "tcl", "tex", "tmpl", "toml", "tpl", + "tres", "tsv", "txt", "vtt", @@ -129,6 +133,10 @@ def get_tools(self) -> list[tuple[str, dict]]: "type": "string", "description": "The directory from where the file should be loaded. Defaults to the configured directory.", }, + "pdf_page_number_to_load": { + "type": "number", + "description": "The page number of a pdf to load, if expressly specified by the user.", + }, }, "required": ["file_name"], }, @@ -237,25 +245,31 @@ async def execute_tool( ) file_name = parameters.get("file_name") directory = parameters.get("directory_path", self.default_directory) + pdf_page_number = parameters.get("pdf_page_number_to_load") if directory == "": directory = self.default_directory if not file_name or file_name == "": function_response = "File name not provided." else: file_extension = file_name.split(".")[-1] - if file_extension not in self.allowed_file_extensions: + if file_extension.lower() not in self.allowed_file_extensions: function_response = f"Unsupported file extension: {file_extension}" else: file_path = os.path.join(directory, file_name) try: - with open(file_path, "r", encoding="utf-8") as file: - file_content = file.read() - if len(file_content) > self.max_text_size: - function_response = ( - "File content exceeds the maximum allowed size." - ) - else: - function_response = f"File content loaded from {file_path}:\n{file_content}" + # if PDF, use pdfminer.six's extract text to read (optionally passing the specific page to read - zero-indexed so subtract 1), otherwise open and parse file + file_content = "" + if file_extension.lower() == "pdf": + file_content = extract_text(file_path, page_numbers=[pdf_page_number-1]) if pdf_page_number else extract_text(file_path) + else: + with open(file_path, "r", encoding="utf-8") as file: + file_content = file.read() + if len(file_content) > self.max_text_size: + function_response = ( + "File content exceeds the maximum allowed size." + ) + else: + function_response = f"File content loaded from {file_path}:\n{file_content}" except FileNotFoundError: function_response = ( f"File '{file_name}' not found in '{directory}'." @@ -282,12 +296,12 @@ async def execute_tool( function_response = "File name or text content not provided." 
         else:
             file_extension = file_name.split(".")[-1]
-            if file_extension not in self.allowed_file_extensions:
+            if file_extension.lower() not in self.allowed_file_extensions:
                 file_name += f".{self.default_file_extension}"
             if len(text_content) > self.max_text_size:
                 function_response = "Text content exceeds the maximum allowed size."
             else:
-                if file_extension == "json":
+                if file_extension.lower() == "json":
                     try:
                         json_content = json.loads(text_content)
                         text_content = json.dumps(json_content, indent=4)
diff --git a/templates/skills/file_manager/requirements.txt b/templates/skills/file_manager/requirements.txt
new file mode 100644
index 00000000..7f9421c5
Binary files /dev/null and b/templates/skills/file_manager/requirements.txt differ
diff --git a/templates/skills/image_generation/default_config.yaml b/templates/skills/image_generation/default_config.yaml
new file mode 100644
index 00000000..36aa62ac
--- /dev/null
+++ b/templates/skills/image_generation/default_config.yaml
@@ -0,0 +1,18 @@
+name: ImageGeneration
+module: skills.image_generation.main
+category: general
+description:
+  en: Use Wingman AI to generate images based on your input. It uses DALL-E 3.
+  de: Verwende Wingman AI, um Bilder basierend auf deinen Eingaben zu generieren. Es verwendet DALL-E 3.
+# hint:
+#   en:
+#   de:
+examples:
+  - question:
+      en: Generate an image of a cat.
+      de: Generiere ein Bild einer Katze.
+    answer:
+      en: Here is an image of a cat.
+      de: Hier ist ein Bild einer Katze.
+prompt: |
+  You can also generate images.
diff --git a/templates/skills/image_generation/logo.png b/templates/skills/image_generation/logo.png
new file mode 100644
index 00000000..15ef76b5
Binary files /dev/null and b/templates/skills/image_generation/logo.png differ
diff --git a/templates/skills/image_generation/main.py b/templates/skills/image_generation/main.py
new file mode 100644
index 00000000..13aeb355
--- /dev/null
+++ b/templates/skills/image_generation/main.py
@@ -0,0 +1,71 @@
+from typing import TYPE_CHECKING
+from api.enums import LogSource, LogType
+from api.interface import SettingsConfig, SkillConfig, WingmanInitializationError
+from skills.skill_base import Skill
+
+if TYPE_CHECKING:
+    from wingmen.open_ai_wingman import OpenAiWingman
+
+
+class ImageGeneration(Skill):
+
+    def __init__(
+        self,
+        config: SkillConfig,
+        settings: SettingsConfig,
+        wingman: "OpenAiWingman",
+    ) -> None:
+        super().__init__(config=config, settings=settings, wingman=wingman)
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+
+        return errors
+
+    async def execute_tool(
+        self, tool_name: str, parameters: dict[str, any]
+    ) -> tuple[str, str]:
+        instant_response = ""
+        function_response = "I can't generate an image, sorry. Try another provider."
+
+        if tool_name == "generate_image":
+            prompt = parameters["prompt"]
+            if self.settings.debug_mode:
+                await self.printr.print_async(f"Generate image with prompt: {prompt}.", color=LogType.INFO)
+            image = await self.wingman.generate_image(prompt)
+            await self.printr.print_async(
+                "",
+                color=LogType.INFO,
+                source=LogSource.WINGMAN,
+                source_name=self.wingman.name,
+                skill_name=self.name,
+                additional_data={"image_url": image},
+            )
+            function_response = "Here is an image based on your prompt."
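+            # The generated image is handed to clients via the additional_data
+            # payload above; the string returned below is only the text/voice
+            # confirmation that goes back into the conversation.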
+        return function_response, instant_response
+
+    async def is_waiting_response_needed(self, tool_name: str) -> bool:
+        return True
+
+    def get_tools(self) -> list[tuple[str, dict]]:
+        tools = [
+            (
+                "generate_image",
+                {
+                    "type": "function",
+                    "function": {
+                        "name": "generate_image",
+                        "description": "Generate an image based on the user's prompt.",
+                        "parameters": {
+                            "type": "object",
+                            "properties": {
+                                "prompt": {"type": "string"},
+                            },
+                            "required": ["prompt"],
+                        },
+                    },
+                },
+            ),
+        ]
+
+        return tools
diff --git a/templates/skills/msfs2020_control/default_config.yaml b/templates/skills/msfs2020_control/default_config.yaml
new file mode 100644
index 00000000..b153588e
--- /dev/null
+++ b/templates/skills/msfs2020_control/default_config.yaml
@@ -0,0 +1,632 @@
+name: Msfs2020Control
+module: skills.msfs2020_control.main
+category: flight_simulator
+description:
+  en: Control and retrieve data from MSFS2020 dynamically using SimConnect.
+  de: Steuern und Abrufen von Daten aus MSFS2020 dynamisch mit SimConnect.
+hint:
+  en: This skill can retrieve data and perform actions in the MSFS2020 simulator based on dynamic inputs.
+  de: Dieser Skill kann Daten abrufen und Aktionen im MSFS2020-Simulator basierend auf dynamischen Eingaben ausführen.
+examples:
+  - question:
+      en: What is my current altitude?
+      de: Wie hoch bin ich gerade?
+    answer:
+      en: (retrieves 'PLANE_ALTITUDE' from MSFS2020)
+      de: (ruft 'PLANE_ALTITUDE' aus MSFS2020 ab)
+  - question:
+      en: Start the engine.
+      de: Starte den Motor.
+    answer:
+      en: (executes the engine start sequence in MSFS2020)
+      de: (führt die Motorstartsequenz in MSFS2020 aus)
+prompt: |
+  You can interact with the MSFS2020 simulator using the Python SimConnect API.
+  To retrieve data, use 'get_data_from_sim'. To set data or perform actions, use 'set_data_or_perform_action_in_sim'.
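
For reference, a minimal sketch of what these two tools could wrap, based on the community Python-SimConnect package (the imports and the AircraftRequests/AircraftEvents calls follow that library and are illustrative assumptions, not code from this patch):

    # Illustrative sketch, assuming the Python-SimConnect package is installed.
    from SimConnect import SimConnect, AircraftRequests, AircraftEvents

    sim = SimConnect()                            # connect to the running MSFS2020 instance
    requests = AircraftRequests(sim, _time=2000)  # SimVar reads, cached for 2000 ms
    events = AircraftEvents(sim)                  # named simulator events

    # A 'get_data_from_sim'-style read: fetch a SimVar by name.
    altitude_agl = requests.get("PLANE_ALT_ABOVE_GROUND")
    print(f"Above ground: {altitude_agl} ft")

    # A 'set_data_or_perform_action_in_sim'-style action: trigger a named event.
    throttle_full = events.find("THROTTLE_FULL")
    throttle_full()

    sim.exit()                                    # close the SimConnect link
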
+  Example:
+  User: Set throttle to full!
+  Response: (uses the set_data_or_perform_action_in_sim function with THROTTLE_FULL as the action)
+  User: Let's raise the flaps.
+  Response: (uses the set_data_or_perform_action_in_sim function with FLAPS_UP as the action)
+  User: How far are we above ground right now?
+  Response: (uses the get_data_from_sim function with PLANE_ALT_ABOVE_GROUND as the data point)
+  Here are some dictionaries with common action names and their explanations for use with the set_data_or_perform_action_in_sim function:
+  Engine Events:
+  {
+    "THROTTLE_FULL": "Set throttles max",
+    "THROTTLE_INCR": "Increment throttles",
+    "THROTTLE_INCR_SMALL": "Increment throttles small",
+    "THROTTLE_DECR": "Decrement throttles",
+    "THROTTLE_DECR_SMALL": "Decrease throttles small",
+    "THROTTLE_CUT": "Set throttles to idle",
+    "INCREASE_THROTTLE": "Increment throttles",
+    "DECREASE_THROTTLE": "Decrement throttles",
+    "THROTTLE_SET": "Set throttles exactly (0 to 16383),",
+    "THROTTLE1_FULL": "Set throttle 1 max",
+    "THROTTLE1_INCR": "Increment throttle 1",
+    "THROTTLE1_INCR_SMALL": "Increment throttle 1 small",
+    "THROTTLE1_DECR": "Decrement throttle 1",
+    "THROTTLE1_CUT": "Set throttle 1 to idle",
+    "THROTTLE2_FULL": "Set throttle 2 max",
+    "THROTTLE2_INCR": "Increment throttle 2",
+    "THROTTLE2_INCR_SMALL": "Increment throttle 2 small",
+    "THROTTLE2_DECR": "Decrement throttle 2",
+    "THROTTLE2_CUT": "Set throttle 2 to idle",
+    "THROTTLE3_FULL": "Set throttle 3 max",
+    "THROTTLE3_INCR": "Increment throttle 3",
+    "THROTTLE3_INCR_SMALL": "Increment throttle 3 small",
+    "THROTTLE3_DECR": "Decrement throttle 3",
+    "THROTTLE3_CUT": "Set throttle 3 to idle",
+    "THROTTLE4_FULL": "Set throttle 4 max",
+    "THROTTLE4_INCR": "Increment throttle 4",
+    "THROTTLE4_INCR_SMALL": "Increment throttle 4 small",
+    "THROTTLE4_DECR": "Decrement throttle 4",
+    "THROTTLE4_CUT": "Set throttle 4 to idle",
+    "THROTTLE_10": "Set throttles to 10%",
+    "THROTTLE_20": "Set throttles to 20%",
+    "THROTTLE_30": "Set throttles to 30%",
+    "THROTTLE_40": "Set throttles to 40%",
+    "THROTTLE_50": "Set throttles to 50%",
+    "THROTTLE_60": "Set throttles to 60%",
+    "THROTTLE_70": "Set throttles to 70%",
+    "THROTTLE_80": "Set throttles to 80%",
+    "THROTTLE_90": "Set throttles to 90%",
+    "THROTTLE1_DECR_SMALL": "Decrease throttle 1 small",
+    "THROTTLE2_DECR_SMALL": "Decrease throttle 2 small",
+    "THROTTLE3_DECR_SMALL": "Decrease throttle 3 small",
+    "THROTTLE4_DECR_SMALL": "Decrease throttle 4 small",
+    "PROP_PITCH_DECR_SMALL": "Decrease prop levers small",
+    "PROP_PITCH1_DECR_SMALL": "Decrease prop lever 1 small",
+    "PROP_PITCH2_DECR_SMALL": "Decrease prop lever 2 small",
+    "PROP_PITCH3_DECR_SMALL": "Decrease prop lever 3 small",
+    "PROP_PITCH4_DECR_SMALL": "Decrease prop lever 4 small",
+    "MIXTURE1_RICH": "Set mixture lever 1 to max rich",
+    "MIXTURE1_INCR": "Increment mixture lever 1",
+    "MIXTURE1_INCR_SMALL": "Increment mixture lever 1 small",
+    "MIXTURE1_DECR": "Decrement mixture lever 1",
+    "MIXTURE1_LEAN": "Set mixture lever 1 to max lean",
+    "MIXTURE2_RICH": "Set mixture lever 2 to max rich",
+    "MIXTURE2_INCR": "Increment mixture lever 2",
+    "MIXTURE2_INCR_SMALL": "Increment mixture lever 2 small",
+    "MIXTURE2_DECR": "Decrement mixture lever 2",
+    "MIXTURE2_LEAN": "Set mixture lever 2 to max lean",
+    "MIXTURE3_RICH": "Set mixture lever 3 to max rich",
+    "MIXTURE3_INCR": "Increment mixture lever 3",
+    "MIXTURE3_INCR_SMALL": "Increment mixture lever 3 small",
+    "MIXTURE3_DECR": "Decrement mixture lever 3",
+    "MIXTURE3_LEAN": "Set mixture lever 3 to max lean",
+    "MIXTURE4_RICH": "Set mixture lever 4 to max rich",
+    "MIXTURE4_INCR": "Increment mixture lever 4",
+    "MIXTURE4_INCR_SMALL": "Increment mixture lever 4 small",
+    "MIXTURE4_DECR": "Decrement mixture lever 4",
+    "MIXTURE4_LEAN": "Set mixture lever 4 to max lean",
"MIXTURE_RICH": "Set mixture levers to max rich", + "MIXTURE_INCR": "Increment mixture levers", + "MIXTURE_INCR_SMALL": "Increment mixture levers small", + "MIXTURE_DECR": "Decrement mixture levers", + "MIXTURE_LEAN": "Set mixture levers to max lean", + "MIXTURE1_SET": "Set mixture lever 1 exact value (0 to 16383),", + "MIXTURE2_SET": "Set mixture lever 2 exact value (0 to 16383),", + "MIXTURE3_SET": "Set mixture lever 3 exact value (0 to 16383),", + "MIXTURE4_SET": "Set mixture lever 4 exact value (0 to 16383),", + "MIXTURE_SET_BEST": "Set mixture levers to current best power setting", + "MIXTURE_DECR_SMALL": "Decrement mixture levers small", + "MIXTURE1_DECR_SMALL": "Decrement mixture lever 1 small", + "MIXTURE2_DECR_SMALL": "Decrement mixture lever 4 small", + "MIXTURE3_DECR_SMALL": "Decrement mixture lever 4 small", + "MIXTURE4_DECR_SMALL": "Decrement mixture lever 4 small", + "PROP_PITCH_SET": "Set prop pitch levers (0 to 16383),", + "PROP_PITCH_LO": "Set prop pitch levers max (lo pitch),", + "PROP_PITCH_INCR": "Increment prop pitch levers", + "PROP_PITCH_INCR_SMALL": "Increment prop pitch levers small", + "PROP_PITCH_DECR": "Decrement prop pitch levers", + "PROP_PITCH_HI": "Set prop pitch levers min (hi pitch),", + "PROP_PITCH1_LO": "Set prop pitch lever 1 max (lo pitch),", + "PROP_PITCH1_INCR": "Increment prop pitch lever 1", + "PROP_PITCH1_INCR_SMALL": "Increment prop pitch lever 1 small", + "PROP_PITCH1_DECR": "Decrement prop pitch lever 1", + "PROP_PITCH1_HI": "Set prop pitch lever 1 min (hi pitch),", + "PROP_PITCH2_LO": "Set prop pitch lever 2 max (lo pitch),", + "PROP_PITCH2_INCR": "Increment prop pitch lever 2", + "PROP_PITCH2_INCR_SMALL": "Increment prop pitch lever 2 small", + "PROP_PITCH2_DECR": "Decrement prop pitch lever 2", + "PROP_PITCH2_HI": "Set prop pitch lever 2 min (hi pitch),", + "PROP_PITCH3_LO": "Set prop pitch lever 3 max (lo pitch),", + "PROP_PITCH3_INCR": "Increment prop pitch lever 3", + "PROP_PITCH3_INCR_SMALL": "Increment prop pitch lever 3 small", + "PROP_PITCH3_DECR": "Decrement prop pitch lever 3", + "PROP_PITCH3_HI": "Set prop pitch lever 3 min (hi pitch),", + "PROP_PITCH4_LO": "Set prop pitch lever 4 max (lo pitch),", + "PROP_PITCH4_INCR": "Increment prop pitch lever 4", + "PROP_PITCH4_INCR_SMALL": "Increment prop pitch lever 4 small", + "PROP_PITCH4_DECR": "Decrement prop pitch lever 4", + "PROP_PITCH4_HI": "Set prop pitch lever 4 min (hi pitch),", + "JET_STARTER": "Selects jet engine starter (for +/- sequence),", + "MAGNETO_SET": "Sets magnetos (0,1),", + "TOGGLE_STARTER1": "Toggle starter 1", + "TOGGLE_STARTER2": "Toggle starter 2", + "TOGGLE_STARTER3": "Toggle starter 3", + "TOGGLE_STARTER4": "Toggle starter 4", + "TOGGLE_ALL_STARTERS": "Toggle starters", + "ENGINE_AUTO_START": "Triggers auto-start", + "ENGINE_AUTO_SHUTDOWN": "Triggers auto-shutdown", + "MAGNETO": "Selects magnetos (for +/- sequence),", + "MAGNETO_DECR": "Decrease magneto switches positions", + "MAGNETO_INCR": "Increase magneto switches positions", + "MAGNETO1_OFF": "Set engine 1 magnetos off", + "MAGNETO1_RIGHT": "Toggle engine 1 right magneto", + "MAGNETO1_LEFT": "Toggle engine 1 left magneto", + "MAGNETO1_BOTH": "Set engine 1 magnetos on", + "MAGNETO1_START": "Set engine 1 magnetos on and toggle starter", + "MAGNETO2_OFF": "Set engine 2 magnetos off", + "MAGNETO2_RIGHT": "Toggle engine 2 right magneto", + "MAGNETO2_LEFT": "Toggle engine 2 left magneto", + "MAGNETO2_BOTH": "Set engine 2 magnetos on", + "MAGNETO2_START": "Set engine 2 magnetos on and toggle starter", + 
"MAGNETO3_OFF": "Set engine 3 magnetos off", + "MAGNETO3_RIGHT": "Toggle engine 3 right magneto", + "MAGNETO3_LEFT": "Toggle engine 3 left magneto", + "MAGNETO3_BOTH": "Set engine 3 magnetos on", + "MAGNETO3_START": "Set engine 3 magnetos on and toggle starter", + "MAGNETO4_OFF": "Set engine 4 magnetos off", + "MAGNETO4_RIGHT": "Toggle engine 4 right magneto", + "MAGNETO4_LEFT": "Toggle engine 4 left magneto", + "MAGNETO4_BOTH": "Set engine 4 magnetos on", + "MAGNETO4_START": "Set engine 4 magnetos on and toggle starter", + "MAGNETO_OFF": "Set engine magnetos off", + "MAGNETO_RIGHT": "Set engine right magnetos on", + "MAGNETO_LEFT": "Set engine left magnetos on", + "MAGNETO_BOTH": "Set engine magnetos on", + "MAGNETO_START": "Set engine magnetos on and toggle starters", + "MAGNETO1_DECR": "Decrease engine 1 magneto switch position", + "MAGNETO1_INCR": "Increase engine 1 magneto switch position", + "MAGNETO2_DECR": "Decrease engine 2 magneto switch position", + "MAGNETO2_INCR": "Increase engine 2 magneto switch position", + "MAGNETO3_DECR": "Decrease engine 3 magneto switch position", + "MAGNETO3_INCR": "Increase engine 3 magneto switch position", + "MAGNETO4_DECR": "Decrease engine 4 magneto switch position", + "MAGNETO4_INCR": "Increase engine 4 magneto switch position", + "MAGNETO1_SET": "Set engine 1 magneto switch", + "MAGNETO2_SET": "Set engine 2 magneto switch", + "MAGNETO3_SET": "Set engine 3 magneto switch", + "MAGNETO4_SET": "Set engine 4 magneto switch", + "ANTI_ICE_ON": "Sets anti-ice switches on", + "ANTI_ICE_OFF": "Sets anti-ice switches off", + "ANTI_ICE_TOGGLE_ENG1": "Toggle engine 1 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG2": "Toggle engine 2 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG3": "Toggle engine 3 anti-ice switch", + "ANTI_ICE_TOGGLE_ENG4": "Toggle engine 4 anti-ice switch",, + "TOGGLE_FUEL_VALVE_ALL": "Toggle engine fuel valves", + "TOGGLE_FUEL_VALVE_ENG1": "Toggle engine 1 fuel valve", + "TOGGLE_FUEL_VALVE_ENG2": "Toggle engine 2 fuel valve", + "TOGGLE_FUEL_VALVE_ENG3": "Toggle engine 3 fuel valve", + "TOGGLE_FUEL_VALVE_ENG4": "Toggle engine 4 fuel valve", + "INC_COWL_FLAPS": "Increment cowl flap levers", + "DEC_COWL_FLAPS": "Decrement cowl flap levers", + "INC_COWL_FLAPS1": "Increment engine 1 cowl flap lever", + "DEC_COWL_FLAPS1": "Decrement engine 1 cowl flap lever", + "INC_COWL_FLAPS2": "Increment engine 2 cowl flap lever", + "DEC_COWL_FLAPS2": "Decrement engine 2 cowl flap lever", + "INC_COWL_FLAPS3": "Increment engine 3 cowl flap lever", + "DEC_COWL_FLAPS3": "Decrement engine 3 cowl flap lever", + "INC_COWL_FLAPS4": "Increment engine 4 cowl flap lever", + "DEC_COWL_FLAPS4": "Decrement engine 4 cowl flap lever", + "FUEL_PUMP": "Toggle electric fuel pumps", + "TOGGLE_ELECT_FUEL_PUMP": "Toggle electric fuel pumps", + "TOGGLE_ELECT_FUEL_PUMP1": "Toggle engine 1 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP2": "Toggle engine 2 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP3": "Toggle engine 3 electric fuel pump", + "TOGGLE_ELECT_FUEL_PUMP4": "Toggle engine 4 electric fuel pump", + "ENGINE_PRIMER": "Trigger engine primers", + "TOGGLE_PRIMER": "Trigger engine primers", + "TOGGLE_PRIMER1": "Trigger engine 1 primer", + "TOGGLE_PRIMER2": "Trigger engine 2 primer", + "TOGGLE_PRIMER3": "Trigger engine 3 primer", + "TOGGLE_PRIMER4": "Trigger engine 4 primer", + "TOGGLE_FEATHER_SWITCHES": "Trigger propeller switches", + "TOGGLE_FEATHER_SWITCH_1": "Trigger propeller 1 switch", + "TOGGLE_FEATHER_SWITCH_2": "Trigger propeller 2 switch", + "TOGGLE_FEATHER_SWITCH_3": "Trigger 
propeller 3 switch", + "TOGGLE_FEATHER_SWITCH_4": "Trigger propeller 4 switch", + "TOGGLE_PROPELLER_SYNC": "Turns propeller synchronization switch on", + "TOGGLE_AUTOFEATHER_ARM": "Turns auto-feather arming switch on.", + "TOGGLE_AFTERBURNER": "Toggles afterburners", + "ENGINE": "Sets engines for 1,2,3,4 selection (to be followed by SELECT_n)," + } + Flight Controls Events: + { + "FLAPS_UP": "Sets flap handle to full retract position", + "FLAPS_1": "Sets flap handle to first extension position", + "FLAPS_2": "Sets flap handle to second extension position", + "FLAPS_3": "Sets flap handle to third extension position", + "FLAPS_DOWN": "Sets flap handle to full extension position", + "ELEV_TRIM_DN": "Increments elevator trim down", + "ELEV_DOWN": "Increments elevator down", + "AILERONS_LEFT": "Increments ailerons left", + "CENTER_AILER_RUDDER": "Centers aileron and rudder positions", + "AILERONS_RIGHT": "Increments ailerons right", + "ELEV_TRIM_UP": "Increment elevator trim up", + "ELEV_UP": "Increments elevator up", + "RUDDER_LEFT": "Increments rudder left", + "RUDDER_CENTER": "Centers rudder position", + "RUDDER_RIGHT": "Increments rudder right", + "ELEVATOR_SET": "Sets elevator position (-16383 - +16383),", + "AILERON_SET": "Sets aileron position (-16383 - +16383),", + "RUDDER_SET": "Sets rudder position (-16383 - +16383),", + "FLAPS_INCR": "Increments flap handle position", + "FLAPS_DECR": "Decrements flap handle position", + "SPOILERS_ON": "Sets spoiler handle to full extend position", + "SPOILERS_OFF": "Sets spoiler handle to full retract position", + "SPOILERS_ARM_ON": "Sets auto-spoiler arming on", + "SPOILERS_ARM_OFF": "Sets auto-spoiler arming off", + "AILERON_TRIM_LEFT": "Increments aileron trim left", + "AILERON_TRIM_RIGHT": "Increments aileron trim right", + "RUDDER_TRIM_LEFT": "Increments rudder trim left", + "RUDDER_TRIM_RIGHT": "Increments aileron trim right", + "ELEVATOR_TRIM_SET": "Sets elevator trim position (0 to 16383),", + } + Autopilot and Avionics Events: + { + "AP_MASTER": "Toggles AP on/off", + "AUTOPILOT_OFF": "Turns AP off", + "AUTOPILOT_ON": "Turns AP on", + "YAW_DAMPER_TOGGLE": "Toggles yaw damper on/off", + "AP_PANEL_HEADING_HOLD": "Toggles heading hold mode on/off", + "AP_PANEL_ALTITUDE_HOLD": "Toggles altitude hold mode on/off", + "AP_ATT_HOLD_ON": "Turns on AP wing leveler and pitch hold mode", + "AP_LOC_HOLD_ON": "Turns AP localizer hold on/armed and glide-slope hold mode off", + "AP_APR_HOLD_ON": "Turns both AP localizer and glide-slope modes on/armed", + "AP_HDG_HOLD_ON": "Turns heading hold mode on", + "AP_ALT_HOLD_ON": "Turns altitude hold mode on", + "AP_WING_LEVELER_ON": "Turns wing leveler mode on", + "AP_BC_HOLD_ON": "Turns localizer back course hold mode on/armed", + "AP_NAV1_HOLD_ON": "Turns lateral hold mode on", + "AP_ATT_HOLD_OFF": "Turns off attitude hold mode", + "AP_LOC_HOLD_OFF": "Turns off localizer hold mode", + "AP_APR_HOLD_OFF": "Turns off approach hold mode", + "AP_HDG_HOLD_OFF": "Turns off heading hold mode", + "AP_ALT_HOLD_OFF": "Turns off altitude hold mode", + "AP_WING_LEVELER_OFF": "Turns off wing leveler mode", + "AP_BC_HOLD_OFF": "Turns off backcourse mode for localizer hold", + "AP_NAV1_HOLD_OFF": "Turns off nav hold mode", + "AP_AIRSPEED_HOLD": "Toggles airspeed hold mode", + "AUTO_THROTTLE_ARM": "Toggles autothrottle arming mode", + "AUTO_THROTTLE_TO_GA": "Toggles Takeoff/Go Around mode", + "HEADING_BUG_INC": "Increments heading hold reference bug", + "HEADING_BUG_DEC": "Decrements heading hold reference bug", + 
"HEADING_BUG_SET": "Set heading hold reference bug (degrees),", + "AP_PANEL_SPEED_HOLD": "Toggles airspeed hold mode", + "AP_ALT_VAR_INC": "Increments reference altitude", + "AP_ALT_VAR_DEC": "Decrements reference altitude", + "AP_VS_VAR_INC": "Increments vertical speed reference", + "AP_VS_VAR_DEC": "Decrements vertical speed reference", + "AP_SPD_VAR_INC": "Increments airspeed hold reference", + "AP_SPD_VAR_DEC": "Decrements airspeed hold reference", + "AP_PANEL_MACH_HOLD": "Toggles mach hold", + "AP_MACH_VAR_INC": "Increments reference mach", + "AP_MACH_VAR_DEC": "Decrements reference mach", + "AP_MACH_HOLD": "Toggles mach hold", + "AP_ALT_VAR_SET_METRIC": "Sets reference altitude in meters", + "AP_VS_VAR_SET_ENGLISH": "Sets reference vertical speed in feet per minute", + "AP_SPD_VAR_SET": "Sets airspeed reference in knots", + "AP_MACH_VAR_SET": "Sets mach reference", + "YAW_DAMPER_ON": "Turns yaw damper on", + "YAW_DAMPER_OFF": "Turns yaw damper off", + "YAW_DAMPER_SET": "Sets yaw damper on/off (1,0),", + "AP_AIRSPEED_ON": "Turns airspeed hold on", + "AP_AIRSPEED_OFF": "Turns airspeed hold off", + "AP_MACH_ON": "Turns mach hold on", + "AP_MACH_OFF": "Turns mach hold off", + "AP_MACH_SET": "Sets mach hold on/off (1,0),", + "AP_PANEL_ALTITUDE_ON": "Turns altitude hold mode on (without capturing current altitude),", + "AP_PANEL_ALTITUDE_OFF": "Turns altitude hold mode off", + "AP_PANEL_HEADING_ON": "Turns heading mode on (without capturing current heading),", + "AP_PANEL_HEADING_OFF": "Turns heading mode off", + "AP_PANEL_MACH_ON": "Turns on mach hold", + "AP_PANEL_MACH_OFF": "Turns off mach hold", + "AP_PANEL_SPEED_ON": "Turns on speed hold mode", + "AP_PANEL_SPEED_OFF": "Turns off speed hold mode", + "AP_ALT_VAR_SET_ENGLISH": "Sets altitude reference in feet", + "AP_VS_VAR_SET_METRIC": "Sets vertical speed reference in meters per minute", + "TOGGLE_FLIGHT_DIRECTOR": "Toggles flight director on/off", + "SYNC_FLIGHT_DIRECTOR_PITCH": "Synchronizes flight director pitch with current aircraft pitch", + "INCREASE_AUTOBRAKE_CONTROL": "Increments autobrake level", + "DECREASE_AUTOBRAKE_CONTROL": "Decrements autobrake level", + "AP_PANEL_SPEED_HOLD_TOGGLE": "Turns airspeed hold mode on with current airspeed", + "AP_PANEL_MACH_HOLD_TOGGLE": "Sets mach hold reference to current mach", + "AP_NAV_SELECT_SET": "Sets the nav (1 or 2), which is used by the Nav hold modes", + "HEADING_BUG_SELECT": "Selects the heading bug for use with +/-", + "ALTITUDE_BUG_SELECT": "Selects the altitude reference for use with +/-", + "VSI_BUG_SELECT": "Selects the vertical speed reference for use with +/-", + "AIRSPEED_BUG_SELECT": "Selects the airspeed reference for use with +/-", + "AP_PITCH_REF_INC_UP": "Increments the pitch reference for pitch hold mode", + "AP_PITCH_REF_INC_DN": "Decrements the pitch reference for pitch hold mode", + "AP_PITCH_REF_SELECT": "Selects pitch reference for use with +/-", + "AP_ATT_HOLD": "Toggle attitude hold mode", + "AP_LOC_HOLD": "Toggles localizer (only), hold mode", + "AP_APR_HOLD": "Toggles approach hold (localizer and glide-slope),", + "AP_HDG_HOLD": "Toggles heading hold mode", + "AP_ALT_HOLD": "Toggles altitude hold mode", + "AP_WING_LEVELER": "Toggles wing leveler mode", + "AP_BC_HOLD": "Toggles the backcourse mode for the localizer hold", + "AP_NAV1_HOLD": "Toggles the nav hold mode", + "AP_MAX_BANK_INC": "Autopilot max bank angle increment.", + "AP_MAX_BANK_DEC": "Autopilot max bank angle decrement.", + "AP_N1_HOLD": "Autopilot, hold the N1 percentage at its current level.", 
+ "AP_N1_REF_INC": "Increment the autopilot N1 reference.", + "AP_N1_REF_DEC": "Decrement the autopilot N1 reference.", + "AP_N1_REF_SET": "Sets the autopilot N1 reference.", + "FLY_BY_WIRE_ELAC_TOGGLE": "Turn on or off the fly by wire Elevators and Ailerons computer.", + "FLY_BY_WIRE_FAC_TOGGLE": "Turn on or off the fly by wire Flight Augmentation computer.", + "FLY_BY_WIRE_SEC_TOGGLE": "Turn on or off the fly by wire Spoilers and Elevators computer.", + "AP_VS_HOLD": "Toggle VS hold mode", + "FLIGHT_LEVEL_CHANGE": "Toggle FLC mode", + "COM_RADIO_SET": "Sets COM frequency (Must convert integer to BCD16 Hz),", + "NAV1_RADIO_SET": "Sets NAV 1 frequency (Must convert integer to BCD16 Hz),", + "NAV2_RADIO_SET": "Sets NAV 2 frequency (Must convert integer to BCD16 Hz),", + "ADF_SET": "Sets ADF frequency (Must convert integer to BCD16 Hz),", + "XPNDR_SET": "Sets transponder code (Must convert integer to BCD16),", + "VOR1_SET": "Sets OBS 1 (0 to 360),", + "VOR2_SET": "Sets OBS 2 (0 to 360),", + "DME1_TOGGLE": "Sets DME display to Nav 1", + "DME2_TOGGLE": "Sets DME display to Nav 2", + "RADIO_VOR1_IDENT_DISABLE": "Turns NAV 1 ID off", + "RADIO_VOR2_IDENT_DISABLE": "Turns NAV 2 ID off", + "RADIO_DME1_IDENT_DISABLE": "Turns DME 1 ID off", + "RADIO_DME2_IDENT_DISABLE": "Turns DME 2 ID off", + "RADIO_ADF_IDENT_DISABLE": "Turns ADF 1 ID off", + "RADIO_VOR1_IDENT_ENABLE": "Turns NAV 1 ID on", + "RADIO_VOR2_IDENT_ENABLE": "Turns NAV 2 ID on", + "RADIO_DME1_IDENT_ENABLE": "Turns DME 1 ID on", + "RADIO_DME2_IDENT_ENABLE": "Turns DME 2 ID on", + "RADIO_ADF_IDENT_ENABLE": "Turns ADF 1 ID on", + "RADIO_VOR1_IDENT_TOGGLE": "Toggles NAV 1 ID", + "RADIO_VOR2_IDENT_TOGGLE": "Toggles NAV 2 ID", + "RADIO_DME1_IDENT_TOGGLE": "Toggles DME 1 ID", + "RADIO_DME2_IDENT_TOGGLE": "Toggles DME 2 ID", + "RADIO_ADF_IDENT_TOGGLE": "Toggles ADF 1 ID", + "RADIO_VOR1_IDENT_SET": "Sets NAV 1 ID (on/off),", + "RADIO_VOR2_IDENT_SET": "Sets NAV 2 ID (on/off),", + "RADIO_DME1_IDENT_SET": "Sets DME 1 ID (on/off),", + "RADIO_DME2_IDENT_SET": "Sets DME 2 ID (on/off),", + "RADIO_ADF_IDENT_SET": "Sets ADF 1 ID (on/off),", + "ADF_CARD_INC": "Increments ADF card", + "ADF_CARD_DEC": "Decrements ADF card", + "ADF_CARD_SET": "Sets ADF card (0-360),", + "TOGGLE_DME": "Toggles between NAV 1 and NAV 2", + "TOGGLE_AVIONICS_MASTER": "Toggles the avionics master switch", + "COM_STBY_RADIO_SET": "Sets COM 1 standby frequency (Must convert integer to BCD16 Hz),", + "COM_STBY_RADIO_SWAP": "Swaps COM 1 frequency with standby", + "COM2_RADIO_SET": "Sets COM 2 frequency (BCD Hz),", + "COM2_STBY_RADIO_SET": "Sets COM 2 standby frequency (Must convert integer to BCD16 Hz),", + "COM2_RADIO_SWAP": "Swaps COM 2 frequency with standby", + "NAV1_STBY_SET": "Sets NAV 1 standby frequency (Must convert integer to BCD16 Hz),", + "NAV1_RADIO_SWAP": "Swaps NAV 1 frequency with standby", + "NAV2_STBY_SET": "Sets NAV 2 standby frequency (Must convert integer to BCD16 Hz),", + "NAV2_RADIO_SWAP": "Swaps NAV 2 frequency with standby", + "COM1_TRANSMIT_SELECT": "Selects COM 1 to transmit", + "COM2_TRANSMIT_SELECT": "Selects COM 2 to transmit", + "COM_RECEIVE_ALL_TOGGLE": "Toggles all COM radios to receive on", + "MARKER_SOUND_TOGGLE": "Toggles marker beacon sound on/off", + "ADF_COMPLETE_SET": "Sets ADF 1 frequency (Must convert integer to BCD16 Hz),", + "RADIO_ADF2_IDENT_DISABLE": "Turns ADF 2 ID off", + "RADIO_ADF2_IDENT_ENABLE": "Turns ADF 2 ID on", + "RADIO_ADF2_IDENT_TOGGLE": "Toggles ADF 2 ID", + "RADIO_ADF2_IDENT_SET": "Sets ADF 2 ID on/off (1,0),", + 
"FREQUENCY_SWAP": "Swaps frequency with standby on whichever NAV or COM radio is selected.", + "TOGGLE_GPS_DRIVES_NAV1": "Toggles between GPS and NAV 1 driving NAV 1 OBS display (and AP),", + "GPS_ACTIVATE_BUTTON": "Activates GPS Autopilot mode", + "GPS_POWER_BUTTON": "Toggles power button", + "GPS_NEAREST_BUTTON": "Selects Nearest Airport Page", + "GPS_OBS_BUTTON": "Toggles automatic sequencing of waypoints", + "GPS_MSG_BUTTON": "Toggles the Message Page", + "GPS_MSG_BUTTON_DOWN": "Triggers the pressing of the message button.", + "GPS_MSG_BUTTON_UP": "Triggers the release of the message button", + "GPS_FLIGHTPLAN_BUTTON": "Displays the programmed flightplan.", + "GPS_TERRAIN_BUTTON": "Displays terrain information on default display", + "GPS_PROCEDURE_BUTTON": "Displays the approach procedure page.", + "GPS_ZOOMIN_BUTTON": "Zooms in default display", + "GPS_ZOOMOUT_BUTTON": "Zooms out default display", + "GPS_DIRECTTO_BUTTON": "Brings up the \"Direct To\" page", + "GPS_MENU_BUTTON": "Brings up page to select active legs in a flightplan.", + "GPS_CLEAR_BUTTON": "Clears entered data on a page", + "GPS_CLEAR_ALL_BUTTON": "Clears all data immediately", + "GPS_CLEAR_BUTTON_DOWN": "Triggers the pressing of the Clear button", + "GPS_CLEAR_BUTTON_UP": "Triggers the release of the Clear button.", + "GPS_ENTER_BUTTON": "Approves entered data.", + "GPS_CURSOR_BUTTON": "Selects GPS cursor", + "GPS_GROUP_KNOB_INC": "Increments cursor", + "GPS_GROUP_KNOB_DEC": "Decrements cursor", + "GPS_PAGE_KNOB_INC": "Increments through pages", + "GPS_PAGE_KNOB_DEC": "Decrements through pages", + "DME_SELECT": "Selects one of the two DME systems (1,2),.", + "KOHLSMAN_SET": "Sets altimeter setting (Millibars * 16),", + "BAROMETRIC": "Syncs altimeter setting to sea level pressure, or 29.92 if above 18000 feet", + } + ATC Events: + { + "ATC": "Activates ATC window", + "ATC_MENU_1": "Selects ATC option 1, use other numbers for other options 2-10", + } + Other Miscellaneous Events: + { + "PARKING_BRAKES": "Toggles parking brake on/off", + "GEAR_PUMP": "Increments emergency gear extension", + "PITOT_HEAT_ON": "Turns pitot heat switch on", + "PITOT_HEAT_OFF": "Turns pitot heat switch off", + "GEAR_UP": "Sets gear handle in UP position", + "GEAR_DOWN": "Sets gear handle in DOWN position", + "TOGGLE_MASTER_BATTERY": "Toggles main battery switch", + "TOGGLE_MASTER_ALTERNATOR": "Toggles main alternator/generator switch", + "TOGGLE_ELECTRIC_VACUUM_PUMP": "Toggles backup electric vacuum pump", + "TOGGLE_ALTERNATE_STATIC": "Toggles alternate static pressure port", + "DECREASE_DECISION_HEIGHT": "Decrements decision height reference", + "INCREASE_DECISION_HEIGHT": "Increments decision height reference", + "TOGGLE_STRUCTURAL_DEICE": "Toggles structural deice switch", + "TOGGLE_PROPELLER_DEICE": "Toggles propeller deice switch", + "TOGGLE_ALTERNATOR1": "Toggles alternator/generator 1 switch", + "TOGGLE_ALTERNATOR2": "Toggles alternator/generator 2 switch", + "TOGGLE_ALTERNATOR3": "Toggles alternator/generator 3 switch", + "TOGGLE_ALTERNATOR4": "Toggles alternator/generator 4 switch", + "TOGGLE_MASTER_BATTERY_ALTERNATOR": "Toggles master battery and alternator switch", + "TOGGLE_AIRCRAFT_EXIT": "Toggles primary door open/close. Follow by KEY_SELECT_2, etc for subsequent doors.", + "SET_WING_FOLD": "Sets the wings into the folded position suitable for storage, typically on a carrier. 
Takes a value:\n 1 - fold wings,\n 0 - unfold wings",
+    "TOGGLE_TAIL_HOOK_HANDLE": "Toggles tail hook",
+    "TOGGLE_WATER_RUDDER": "Toggles water rudders",
+    "TOGGLE_PUSHBACK": "Toggles pushback.",
+    "TOGGLE_MASTER_IGNITION_SWITCH": "Toggles master ignition switch",
+    "TOGGLE_TAILWHEEL_LOCK": "Toggles tail wheel lock",
+    "ADD_FUEL_QUANTITY": "Adds fuel to the aircraft, 25% of capacity by default. 0 to 65535 (max fuel), can be passed.",
+    "RETRACT_FLOAT_SWITCH_DEC": "If the plane has retractable floats, moves the retract position from Extend to Neutral, or Neutral to Retract.",
+    "RETRACT_FLOAT_SWITCH_INC": "If the plane has retractable floats, moves the retract position from Retract to Neutral, or Neutral to Extend.",
+    "TOGGLE_WATER_BALLAST_VALVE": "Turn the water ballast valve on or off.",
+    "TOGGLE_VARIOMETER_SWITCH": "Turn the variometer on or off.",
+    "APU_STARTER": "Start up the auxiliary power unit (APU),.",
+    "APU_OFF_SWITCH": "Turn the APU off.",
+    "APU_GENERATOR_SWITCH_TOGGLE": "Turn the auxiliary generator on or off.",
+    "APU_GENERATOR_SWITCH_SET": "Set the auxiliary generator switch (0,1),.",
+    "HYDRAULIC_SWITCH_TOGGLE": "Turn the hydraulic switch on or off.",
+    "BLEED_AIR_SOURCE_CONTROL_INC": "Increases the bleed air source control.",
+    "BLEED_AIR_SOURCE_CONTROL_DEC": "Decreases the bleed air source control.",
+    "BLEED_AIR_SOURCE_CONTROL_SET": "Set to one of:\n 0: auto\n 1: off\n 2: apu\n 3: engines",
+    "TURBINE_IGNITION_SWITCH_TOGGLE": "Turn the turbine ignition switch on or off.",
+    "CABIN_NO_SMOKING_ALERT_SWITCH_TOGGLE": "Turn the \"No smoking\" alert on or off.",
+    "CABIN_SEATBELTS_ALERT_SWITCH_TOGGLE": "Turn the \"Fasten seatbelts\" alert on or off.",
+    "ANTISKID_BRAKES_TOGGLE": "Turn the anti-skid braking system on or off.",
+    "GPWS_SWITCH_TOGGLE": "Turn the ground proximity warning system (GPWS), on or off.",
+    "MANUAL_FUEL_PRESSURE_PUMP": "Activate the manual fuel pressure pump.",
+    "PAUSE_ON": "Turns pause on",
+    "PAUSE_OFF": "Turns pause off",
+    "SIM_RATE_INCR": "Increase sim rate",
+    "SIM_RATE_DECR": "Decrease sim rate",
+    "INVOKE_HELP": "Brings up Help system",
+    "FLIGHT_MAP": "Brings up flight map",
+  }
+  And here are some common data points to obtain aircraft state information using the get_data_from_sim function:
+  {
+    "GROUND_VELOCITY": ["Speed relative to the earth's surface", b'Knots'],
+    "TOTAL_WORLD_VELOCITY": ["Speed relative to the earth's center", b'Feet per second'],
+    "ACCELERATION_WORLD_X": ["Acceleration relative to earth, in east/west direction", b'Feet per second squared'],
+    "ACCELERATION_WORLD_Y": ["Acceleration relative to earth, in vertical direction", b'Feet per second squared'],
+    "ACCELERATION_WORLD_Z": ["Acceleration relative to earth, in north/south direction", b'Feet per second squared'],
+    "ACCELERATION_BODY_X": ["Acceleration relative to aircraft axis, in east/west direction", b'Feet per second squared'],
+    "ACCELERATION_BODY_Y": ["Acceleration relative to aircraft axis, in vertical direction", b'Feet per second squared'],
+    "ACCELERATION_BODY_Z": ["Acceleration relative to aircraft axis, in north/south direction", b'Feet per second squared'],
+    "ROTATION_VELOCITY_BODY_X": ["Rotation relative to aircraft axis", b'Feet per second'],
+    "ROTATION_VELOCITY_BODY_Y": ["Rotation relative to aircraft axis", b'Feet per second'],
+    "ROTATION_VELOCITY_BODY_Z": ["Rotation relative to aircraft axis", b'Feet per second'],
+    "RELATIVE_WIND_VELOCITY_BODY_X": ["Lateral speed relative to wind", b'Feet per second'],
+    "RELATIVE_WIND_VELOCITY_BODY_Y":
["Vertical speed relative to wind", b'Feet per second'], + "RELATIVE_WIND_VELOCITY_BODY_Z": ["Longitudinal speed relative to wind", b'Feet per second'], + "PLANE_ALT_ABOVE_GROUND": ["Altitude above the surface", b'Feet'], + "PLANE_LATITUDE": ["Latitude of aircraft, North is positive, South negative", b'Degrees'], + "PLANE_LONGITUDE": ["Longitude of aircraft, East is positive, West negative", b'Degrees'], + "PLANE_ALTITUDE": ["Altitude of aircraft", b'Feet'], + "PLANE_PITCH_DEGREES": ["Pitch angle, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_BANK_DEGREES": ["Bank angle, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_HEADING_DEGREES_TRUE": ["Heading relative to true north, although the name mentions degrees the units used are radians", b'Radians'], + "PLANE_HEADING_DEGREES_MAGNETIC": ["Heading relative to magnetic north, although the name mentions degrees the units used are radians", b'Radians'], + "MAGVAR": ["Magnetic variation", b'Degrees'], + "GROUND_ALTITUDE": ["Altitude of surface", b'Meters'], + "SIM_ON_GROUND": ["On ground flag", b'Bool'], + "INCIDENCE_ALPHA": ["Angle of attack", b'Radians'], + "INCIDENCE_BETA": ["Sideslip angle", b'Radians'], + "AIRSPEED_TRUE": ["True airspeed", b'Knots'], + "AIRSPEED_INDICATED": ["Indicated airspeed", b'Knots'], + "AIRSPEED_TRUE_CALIBRATE": ["Angle of True calibration scale on airspeed indicator", b'Degrees'], + "AIRSPEED_BARBER_POLE": ["Redline airspeed (dynamic on some aircraft)", b'Knots'], + "AIRSPEED_MACH": ["Current mach", b'Mach'], + "VERTICAL_SPEED": ["Vertical speed indication", b'feet/minute'], + "MACH_MAX_OPERATE": ["Maximum design mach", b'Mach'], + "STALL_WARNING": ["Stall warning state", b'Bool'], + "OVERSPEED_WARNING": ["Overspeed warning state", b'Bool'], + "INDICATED_ALTITUDE": ["Altimeter indication", b'Feet'], + "ATTITUDE_INDICATOR_PITCH_DEGREES": ["AI pitch indication", b'Radians'], + "ATTITUDE_INDICATOR_BANK_DEGREES": ["AI bank indication", b'Radians'], + "ATTITUDE_BARS_POSITION": ["AI reference pitch reference bars", b'Percent Over 100'], + "ATTITUDE_CAGE": ["AI caged state", b'Bool'], + "WISKEY_COMPASS_INDICATION_DEGREES": ["Magnetic compass indication", b'Degrees'], + "PLANE_HEADING_DEGREES_GYRO": ["Heading indicator (directional gyro) indication", b'Radians'], + "HEADING_INDICATOR": ["Heading indicator (directional gyro) indication", b'Radians'], + "GYRO_DRIFT_ERROR": ["Angular error of heading indicator", b'Radians',], + "DELTA_HEADING_RATE": ["Rate of turn of heading indicator", b'Radians per second'], + "TURN_COORDINATOR_BALL": ["Turn coordinator ball position", b'Position'], + "ANGLE_OF_ATTACK_INDICATOR": ["AoA indication", b'Radians'], + "RADIO_HEIGHT": ["Radar altitude", b'Feet'], + "ABSOLUTE_TIME": ["Time, as referenced from 12:00 AM January 1, 0000", b'Seconds'], + "ZULU_TIME": ["Greenwich Mean Time (GMT)", b'Seconds'], + "ZULU_DAY_OF_WEEK": ["GMT day of week", b'Number'], + "ZULU_DAY_OF_MONTH": ["GMT day of month", b'Number'], + "ZULU_MONTH_OF_YEAR": ["GMT month of year", b'Number',], + "ZULU_DAY_OF_YEAR": ["GMT day of year", b'Number'], + "ZULU_YEAR": ["GMT year", b'Number'], + "LOCAL_TIME": ["Local time", b'Seconds'], + "LOCAL_DAY_OF_WEEK": ["Local day of week", b'Number'], + "LOCAL_DAY_OF_MONTH": ["Local day of month", b'Number',], + "LOCAL_MONTH_OF_YEAR": ["Local month of year", b'Number',], + "LOCAL_DAY_OF_YEAR": ["Local day of year", b'Number'], + "LOCAL_YEAR": ["Local year", b'Number'], + "TIME_ZONE_OFFSET": ["Local time 
difference from GMT", b'Seconds'],
+    "ATC_TYPE": ["Type used by ATC", b'String'],
+    "ATC_MODEL": ["Model used by ATC", b'String'],
+    "ATC_ID": ["ID used by ATC", b'String'],
+    "ATC_AIRLINE": ["Airline used by ATC", b'String'],
+    "ATC_FLIGHT_NUMBER": ["Flight Number used by ATC", b'String'],
+    "TITLE": ["Title from aircraft.cfg", b'String'],
+    "HSI_STATION_IDENT": ["Tuned station identifier", b'String'],
+    "GPS_WP_PREV_ID": ["ID of previous GPS waypoint", b'String'],
+    "GPS_WP_NEXT_ID": ["ID of next GPS waypoint", b'String'],
+    "GPS_APPROACH_AIRPORT_ID": ["ID of airport", b'String'],
+    "GPS_APPROACH_APPROACH_ID": ["ID of approach", b'String'],
+    "GPS_APPROACH_TRANSITION_ID": ["ID of approach transition", b'String'],
+    "GPS_ETA": ["Estimated time of arrival at destination in seconds"],
+    "GPS_ETE": ["Estimated time en route to destination in seconds"],
+    "GPS_TARGET_DISTANCE": ["Estimated distance to destination in meters"],
+    "NAV_LOC_AIRPORT_IDENT": ["Airport ICAO code for airport tuned in Nav radio"],
+    "AMBIENT_DENSITY": ["Ambient density", b'Slugs per cubic feet'],
+    "AMBIENT_TEMPERATURE": ["Ambient temperature", b'Celsius'],
+    "AMBIENT_PRESSURE": ["Ambient pressure", b'inHg'],
+    "AMBIENT_WIND_VELOCITY": ["Wind velocity", b'Knots'],
+    "AMBIENT_WIND_DIRECTION": ["Wind direction", b'Degrees'],
+    "AMBIENT_PRECIP_STATE": ["State of current precipitation", b'String'],
+    "BAROMETER_PRESSURE": ["Barometric pressure", b'Millibars'],
+    "SEA_LEVEL_PRESSURE": ["Barometric pressure at sea level", b'Millibars'],
+    "TOTAL_AIR_TEMPERATURE": ["Total air temperature is the air temperature at the front of the aircraft where the ram pressure from the speed of the aircraft is taken into account.", b'Celsius'],
+    "AMBIENT_IN_CLOUD": ["True if the aircraft is in a cloud.", b'Bool'],
+    "AMBIENT_VISIBILITY": ["Ambient visibility", b'Meters'],
+    "GENERAL_ENG_RPM:index": ["Engine rpm", b'Rpm'],
+    "GENERAL_ENG_PCT_MAX_RPM:index": ["Percent of max rated rpm", b'Percent'],
+    "GENERAL_ENG_EXHAUST_GAS_TEMPERATURE:index": ["Engine exhaust gas temperature.", b'Rankine'],
+    "GENERAL_ENG_OIL_PRESSURE:index": ["Engine oil pressure", b'Psf'],
+    "GENERAL_ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'Rankine'],
+    "GENERAL_ENG_FUEL_PRESSURE:index": ["Engine fuel pressure", b'Psi'],
+    "ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'Rankine'],
+    "FUEL_TOTAL_QUANTITY": ["Current fuel in volume", b'Gallons'],
+    "FUEL_TOTAL_CAPACITY": ["Total fuel capacity of the aircraft", b'Gallons'],
+  }
+custom_properties:
+  - hint: Whether to try to autostart data monitoring mode (tour guide mode), which automatically monitors for latitude, longitude and altitude. This can also be toggled with a voice command to start or end data monitoring mode.
+    id: autostart_data_monitoring_loop_mode
+    name: Autostart tour guide mode
+    property_type: boolean
+    required: true
+    value: false
+  - hint: Minimum interval in seconds before the data monitoring loop will run again.
+    id: min_data_monitoring_seconds
+    name: Minimum monitoring interval
+    property_type: number
+    required: true
+    value: 60
+  - hint: Maximum interval in seconds before the data monitoring loop will run again.
+    id: max_data_monitoring_seconds
+    name: Maximum monitoring interval
+    property_type: number
+    required: true
+    value: 360
+  - hint: The backstory to use for data monitoring mode. Leave blank if you just want to use what is already in your wingman's backstory.
+    id: data_monitoring_backstory
+    name: Tour guide mode backstory
+    property_type: textarea
+    required: false
+    value: |
+      You are a friendly copilot in the plane with the user, the pilot. Casually comment on the following information in a way that keeps your personality and role play. If it is data about a place, comment about just having flown over the place (or if the plane is on the ground, about being at the place) and provide some brief commentary on the place in an engaging tone. Keep your remarks succinct, and avoid directly talking about latitudes and longitudes:
\ No newline at end of file
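The four custom properties above configure an optional background "tour guide" loop. Its implementation is not part of this excerpt; under the assumption that the skill sleeps a random interval between the configured minimum and maximum before commenting, the loop might look like this sketch (`report_position` is a hypothetical callback):

```python
# Rough sketch of the monitoring loop implied by the custom properties above;
# the skill's actual code is not shown in this diff.
import asyncio
import random


async def data_monitoring_loop(min_seconds: float, max_seconds: float, report_position):
    """Periodically hand latitude/longitude/altitude to the wingman for commentary."""
    while True:
        # wait a random interval within the configured bounds (60-360 s by default)
        await asyncio.sleep(random.uniform(min_seconds, max_seconds))
        await report_position()  # hypothetical: reads the simvars and comments on them
```

diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/INSTALLER b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/INSTALLER
new file mode 100644
index 00000000..a1b589e3
--- /dev/null
+++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/INSTALLER
@@ -0,0 +1 @@
+pip
diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/LICENSE b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/LICENSE
new file mode 100644
index 00000000..0ad25db4
--- /dev/null
+++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/LICENSE
@@ -0,0 +1,661 @@
+                    GNU AFFERO GENERAL PUBLIC LICENSE
+                       Version 3, 19 November 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+                            Preamble
+
+  The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+  The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.
+
+  When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+  Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+
+  A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+
+  The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community.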
It requires the operator of a network server to +provide the source code of the modified version running there to the +users of that server. Therefore, public use of a modified version, on +a publicly accessible server, gives the public access to the source +code of the modified version. + + An older license, called the Affero General Public License and +published by Affero, was designed to accomplish similar goals. This is +a different license, not a version of the Affero GPL, but Affero has +released a new version of the Affero GPL which permits relicensing under +this license. + + The precise terms and conditions for copying, distribution and +modification follow. + + TERMS AND CONDITIONS + + 0. Definitions. + + "This License" refers to version 3 of the GNU Affero General Public License. + + "Copyright" also means copyright-like laws that apply to other kinds of +works, such as semiconductor masks. + + "The Program" refers to any copyrightable work licensed under this +License. Each licensee is addressed as "you". "Licensees" and +"recipients" may be individuals or organizations. + + To "modify" a work means to copy from or adapt all or part of the work +in a fashion requiring copyright permission, other than the making of an +exact copy. The resulting work is called a "modified version" of the +earlier work or a work "based on" the earlier work. + + A "covered work" means either the unmodified Program or a work based +on the Program. + + To "propagate" a work means to do anything with it that, without +permission, would make you directly or secondarily liable for +infringement under applicable copyright law, except executing it on a +computer or modifying a private copy. Propagation includes copying, +distribution (with or without modification), making available to the +public, and in some countries other activities as well. + + To "convey" a work means any kind of propagation that enables other +parties to make or receive copies. Mere interaction with a user through +a computer network, with no transfer of a copy, is not conveying. + + An interactive user interface displays "Appropriate Legal Notices" +to the extent that it includes a convenient and prominently visible +feature that (1) displays an appropriate copyright notice, and (2) +tells the user that there is no warranty for the work (except to the +extent that warranties are provided), that licensees may convey the +work under this License, and how to view a copy of this License. If +the interface presents a list of user commands or options, such as a +menu, a prominent item in the list meets this criterion. + + 1. Source Code. + + The "source code" for a work means the preferred form of the work +for making modifications to it. "Object code" means any non-source +form of a work. + + A "Standard Interface" means an interface that either is an official +standard defined by a recognized standards body, or, in the case of +interfaces specified for a particular programming language, one that +is widely used among developers working in that language. + + The "System Libraries" of an executable work include anything, other +than the work as a whole, that (a) is included in the normal form of +packaging a Major Component, but which is not part of that Major +Component, and (b) serves only to enable use of the work with that +Major Component, or to implement a Standard Interface for which an +implementation is available to the public in source code form. 
A +"Major Component", in this context, means a major essential component +(kernel, window system, and so on) of the specific operating system +(if any) on which the executable work runs, or a compiler used to +produce the work, or an object code interpreter used to run it. + + The "Corresponding Source" for a work in object code form means all +the source code needed to generate, install, and (for an executable +work) run the object code and to modify the work, including scripts to +control those activities. However, it does not include the work's +System Libraries, or general-purpose tools or generally available free +programs which are used unmodified in performing those activities but +which are not part of the work. For example, Corresponding Source +includes interface definition files associated with source files for +the work, and the source code for shared libraries and dynamically +linked subprograms that the work is specifically designed to require, +such as by intimate data communication or control flow between those +subprograms and other parts of the work. + + The Corresponding Source need not include anything that users +can regenerate automatically from other parts of the Corresponding +Source. + + The Corresponding Source for a work in source code form is that +same work. + + 2. Basic Permissions. + + All rights granted under this License are granted for the term of +copyright on the Program, and are irrevocable provided the stated +conditions are met. This License explicitly affirms your unlimited +permission to run the unmodified Program. The output from running a +covered work is covered by this License only if the output, given its +content, constitutes a covered work. This License acknowledges your +rights of fair use or other equivalent, as provided by copyright law. + + You may make, run and propagate covered works that you do not +convey, without conditions so long as your license otherwise remains +in force. You may convey covered works to others for the sole purpose +of having them make modifications exclusively for you, or provide you +with facilities for running those works, provided that you comply with +the terms of this License in conveying all material for which you do +not control copyright. Those thus making or running the covered works +for you must do so exclusively on your behalf, under your direction +and control, on terms that prohibit them from making any copies of +your copyrighted material outside their relationship with you. + + Conveying under any other circumstances is permitted solely under +the conditions stated below. Sublicensing is not allowed; section 10 +makes it unnecessary. + + 3. Protecting Users' Legal Rights From Anti-Circumvention Law. + + No covered work shall be deemed part of an effective technological +measure under any applicable law fulfilling obligations under article +11 of the WIPO copyright treaty adopted on 20 December 1996, or +similar laws prohibiting or restricting circumvention of such +measures. + + When you convey a covered work, you waive any legal power to forbid +circumvention of technological measures to the extent such circumvention +is effected by exercising rights under this License with respect to +the covered work, and you disclaim any intention to limit operation or +modification of the work as a means of enforcing, against the work's +users, your or third parties' legal rights to forbid circumvention of +technological measures. + + 4. Conveying Verbatim Copies. 
+ + You may convey verbatim copies of the Program's source code as you +receive it, in any medium, provided that you conspicuously and +appropriately publish on each copy an appropriate copyright notice; +keep intact all notices stating that this License and any +non-permissive terms added in accord with section 7 apply to the code; +keep intact all notices of the absence of any warranty; and give all +recipients a copy of this License along with the Program. + + You may charge any price or no price for each copy that you convey, +and you may offer support or warranty protection for a fee. + + 5. Conveying Modified Source Versions. + + You may convey a work based on the Program, or the modifications to +produce it from the Program, in the form of source code under the +terms of section 4, provided that you also meet all of these conditions: + + a) The work must carry prominent notices stating that you modified + it, and giving a relevant date. + + b) The work must carry prominent notices stating that it is + released under this License and any conditions added under section + 7. This requirement modifies the requirement in section 4 to + "keep intact all notices". + + c) You must license the entire work, as a whole, under this + License to anyone who comes into possession of a copy. This + License will therefore apply, along with any applicable section 7 + additional terms, to the whole of the work, and all its parts, + regardless of how they are packaged. This License gives no + permission to license the work in any other way, but it does not + invalidate such permission if you have separately received it. + + d) If the work has interactive user interfaces, each must display + Appropriate Legal Notices; however, if the Program has interactive + interfaces that do not display Appropriate Legal Notices, your + work need not make them do so. + + A compilation of a covered work with other separate and independent +works, which are not by their nature extensions of the covered work, +and which are not combined with it such as to form a larger program, +in or on a volume of a storage or distribution medium, is called an +"aggregate" if the compilation and its resulting copyright are not +used to limit the access or legal rights of the compilation's users +beyond what the individual works permit. Inclusion of a covered work +in an aggregate does not cause this License to apply to the other +parts of the aggregate. + + 6. Conveying Non-Source Forms. + + You may convey a covered work in object code form under the terms +of sections 4 and 5, provided that you also convey the +machine-readable Corresponding Source under the terms of this License, +in one of these ways: + + a) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by the + Corresponding Source fixed on a durable physical medium + customarily used for software interchange. 
+ + b) Convey the object code in, or embodied in, a physical product + (including a physical distribution medium), accompanied by a + written offer, valid for at least three years and valid for as + long as you offer spare parts or customer support for that product + model, to give anyone who possesses the object code either (1) a + copy of the Corresponding Source for all the software in the + product that is covered by this License, on a durable physical + medium customarily used for software interchange, for a price no + more than your reasonable cost of physically performing this + conveying of source, or (2) access to copy the + Corresponding Source from a network server at no charge. + + c) Convey individual copies of the object code with a copy of the + written offer to provide the Corresponding Source. This + alternative is allowed only occasionally and noncommercially, and + only if you received the object code with such an offer, in accord + with subsection 6b. + + d) Convey the object code by offering access from a designated + place (gratis or for a charge), and offer equivalent access to the + Corresponding Source in the same way through the same place at no + further charge. You need not require recipients to copy the + Corresponding Source along with the object code. If the place to + copy the object code is a network server, the Corresponding Source + may be on a different server (operated by you or a third party) + that supports equivalent copying facilities, provided you maintain + clear directions next to the object code saying where to find the + Corresponding Source. Regardless of what server hosts the + Corresponding Source, you remain obligated to ensure that it is + available for as long as needed to satisfy these requirements. + + e) Convey the object code using peer-to-peer transmission, provided + you inform other peers where the object code and Corresponding + Source of the work are being offered to the general public at no + charge under subsection 6d. + + A separable portion of the object code, whose source code is excluded +from the Corresponding Source as a System Library, need not be +included in conveying the object code work. + + A "User Product" is either (1) a "consumer product", which means any +tangible personal property which is normally used for personal, family, +or household purposes, or (2) anything designed or sold for incorporation +into a dwelling. In determining whether a product is a consumer product, +doubtful cases shall be resolved in favor of coverage. For a particular +product received by a particular user, "normally used" refers to a +typical or common use of that class of product, regardless of the status +of the particular user or of the way in which the particular user +actually uses, or expects or is expected to use, the product. A product +is a consumer product regardless of whether the product has substantial +commercial, industrial or non-consumer uses, unless such uses represent +the only significant mode of use of the product. + + "Installation Information" for a User Product means any methods, +procedures, authorization keys, or other information required to install +and execute modified versions of a covered work in that User Product from +a modified version of its Corresponding Source. The information must +suffice to ensure that the continued functioning of the modified object +code is in no case prevented or interfered with solely because +modification has been made. 
+ + If you convey an object code work under this section in, or with, or +specifically for use in, a User Product, and the conveying occurs as +part of a transaction in which the right of possession and use of the +User Product is transferred to the recipient in perpetuity or for a +fixed term (regardless of how the transaction is characterized), the +Corresponding Source conveyed under this section must be accompanied +by the Installation Information. But this requirement does not apply +if neither you nor any third party retains the ability to install +modified object code on the User Product (for example, the work has +been installed in ROM). + + The requirement to provide Installation Information does not include a +requirement to continue to provide support service, warranty, or updates +for a work that has been modified or installed by the recipient, or for +the User Product in which it has been modified or installed. Access to a +network may be denied when the modification itself materially and +adversely affects the operation of the network or violates the rules and +protocols for communication across the network. + + Corresponding Source conveyed, and Installation Information provided, +in accord with this section must be in a format that is publicly +documented (and with an implementation available to the public in +source code form), and must require no special password or key for +unpacking, reading or copying. + + 7. Additional Terms. + + "Additional permissions" are terms that supplement the terms of this +License by making exceptions from one or more of its conditions. +Additional permissions that are applicable to the entire Program shall +be treated as though they were included in this License, to the extent +that they are valid under applicable law. If additional permissions +apply only to part of the Program, that part may be used separately +under those permissions, but the entire Program remains governed by +this License without regard to the additional permissions. + + When you convey a copy of a covered work, you may at your option +remove any additional permissions from that copy, or from any part of +it. (Additional permissions may be written to require their own +removal in certain cases when you modify the work.) You may place +additional permissions on material, added by you to a covered work, +for which you have or can give appropriate copyright permission. 
+ + Notwithstanding any other provision of this License, for material you +add to a covered work, you may (if authorized by the copyright holders of +that material) supplement the terms of this License with terms: + + a) Disclaiming warranty or limiting liability differently from the + terms of sections 15 and 16 of this License; or + + b) Requiring preservation of specified reasonable legal notices or + author attributions in that material or in the Appropriate Legal + Notices displayed by works containing it; or + + c) Prohibiting misrepresentation of the origin of that material, or + requiring that modified versions of such material be marked in + reasonable ways as different from the original version; or + + d) Limiting the use for publicity purposes of names of licensors or + authors of the material; or + + e) Declining to grant rights under trademark law for use of some + trade names, trademarks, or service marks; or + + f) Requiring indemnification of licensors and authors of that + material by anyone who conveys the material (or modified versions of + it) with contractual assumptions of liability to the recipient, for + any liability that these contractual assumptions directly impose on + those licensors and authors. + + All other non-permissive additional terms are considered "further +restrictions" within the meaning of section 10. If the Program as you +received it, or any part of it, contains a notice stating that it is +governed by this License along with a term that is a further +restriction, you may remove that term. If a license document contains +a further restriction but permits relicensing or conveying under this +License, you may add to a covered work material governed by the terms +of that license document, provided that the further restriction does +not survive such relicensing or conveying. + + If you add terms to a covered work in accord with this section, you +must place, in the relevant source files, a statement of the +additional terms that apply to those files, or a notice indicating +where to find the applicable terms. + + Additional terms, permissive or non-permissive, may be stated in the +form of a separately written license, or stated as exceptions; +the above requirements apply either way. + + 8. Termination. + + You may not propagate or modify a covered work except as expressly +provided under this License. Any attempt otherwise to propagate or +modify it is void, and will automatically terminate your rights under +this License (including any patent licenses granted under the third +paragraph of section 11). + + However, if you cease all violation of this License, then your +license from a particular copyright holder is reinstated (a) +provisionally, unless and until the copyright holder explicitly and +finally terminates your license, and (b) permanently, if the copyright +holder fails to notify you of the violation by some reasonable means +prior to 60 days after the cessation. + + Moreover, your license from a particular copyright holder is +reinstated permanently if the copyright holder notifies you of the +violation by some reasonable means, this is the first time you have +received notice of violation of this License (for any work) from that +copyright holder, and you cure the violation prior to 30 days after +your receipt of the notice. + + Termination of your rights under this section does not terminate the +licenses of parties who have received copies or rights from you under +this License. 
If your rights have been terminated and not permanently +reinstated, you do not qualify to receive new licenses for the same +material under section 10. + + 9. Acceptance Not Required for Having Copies. + + You are not required to accept this License in order to receive or +run a copy of the Program. Ancillary propagation of a covered work +occurring solely as a consequence of using peer-to-peer transmission +to receive a copy likewise does not require acceptance. However, +nothing other than this License grants you permission to propagate or +modify any covered work. These actions infringe copyright if you do +not accept this License. Therefore, by modifying or propagating a +covered work, you indicate your acceptance of this License to do so. + + 10. Automatic Licensing of Downstream Recipients. + + Each time you convey a covered work, the recipient automatically +receives a license from the original licensors, to run, modify and +propagate that work, subject to this License. You are not responsible +for enforcing compliance by third parties with this License. + + An "entity transaction" is a transaction transferring control of an +organization, or substantially all assets of one, or subdividing an +organization, or merging organizations. If propagation of a covered +work results from an entity transaction, each party to that +transaction who receives a copy of the work also receives whatever +licenses to the work the party's predecessor in interest had or could +give under the previous paragraph, plus a right to possession of the +Corresponding Source of the work from the predecessor in interest, if +the predecessor has it or can get it with reasonable efforts. + + You may not impose any further restrictions on the exercise of the +rights granted or affirmed under this License. For example, you may +not impose a license fee, royalty, or other charge for exercise of +rights granted under this License, and you may not initiate litigation +(including a cross-claim or counterclaim in a lawsuit) alleging that +any patent claim is infringed by making, using, selling, offering for +sale, or importing the Program or any portion of it. + + 11. Patents. + + A "contributor" is a copyright holder who authorizes use under this +License of the Program or a work on which the Program is based. The +work thus licensed is called the contributor's "contributor version". + + A contributor's "essential patent claims" are all patent claims +owned or controlled by the contributor, whether already acquired or +hereafter acquired, that would be infringed by some manner, permitted +by this License, of making, using, or selling its contributor version, +but do not include claims that would be infringed only as a +consequence of further modification of the contributor version. For +purposes of this definition, "control" includes the right to grant +patent sublicenses in a manner consistent with the requirements of +this License. + + Each contributor grants you a non-exclusive, worldwide, royalty-free +patent license under the contributor's essential patent claims, to +make, use, sell, offer for sale, import and otherwise run, modify and +propagate the contents of its contributor version. + + In the following three paragraphs, a "patent license" is any express +agreement or commitment, however denominated, not to enforce a patent +(such as an express permission to practice a patent or covenant not to +sue for patent infringement). 
To "grant" such a patent license to a +party means to make such an agreement or commitment not to enforce a +patent against the party. + + If you convey a covered work, knowingly relying on a patent license, +and the Corresponding Source of the work is not available for anyone +to copy, free of charge and under the terms of this License, through a +publicly available network server or other readily accessible means, +then you must either (1) cause the Corresponding Source to be so +available, or (2) arrange to deprive yourself of the benefit of the +patent license for this particular work, or (3) arrange, in a manner +consistent with the requirements of this License, to extend the patent +license to downstream recipients. "Knowingly relying" means you have +actual knowledge that, but for the patent license, your conveying the +covered work in a country, or your recipient's use of the covered work +in a country, would infringe one or more identifiable patents in that +country that you have reason to believe are valid. + + If, pursuant to or in connection with a single transaction or +arrangement, you convey, or propagate by procuring conveyance of, a +covered work, and grant a patent license to some of the parties +receiving the covered work authorizing them to use, propagate, modify +or convey a specific copy of the covered work, then the patent license +you grant is automatically extended to all recipients of the covered +work and works based on it. + + A patent license is "discriminatory" if it does not include within +the scope of its coverage, prohibits the exercise of, or is +conditioned on the non-exercise of one or more of the rights that are +specifically granted under this License. You may not convey a covered +work if you are a party to an arrangement with a third party that is +in the business of distributing software, under which you make payment +to the third party based on the extent of your activity of conveying +the work, and under which the third party grants, to any of the +parties who would receive the covered work from you, a discriminatory +patent license (a) in connection with copies of the covered work +conveyed by you (or copies made from those copies), or (b) primarily +for and in connection with specific products or compilations that +contain the covered work, unless you entered into that arrangement, +or that patent license was granted, prior to 28 March 2007. + + Nothing in this License shall be construed as excluding or limiting +any implied license or other defenses to infringement that may +otherwise be available to you under applicable patent law. + + 12. No Surrender of Others' Freedom. + + If conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot convey a +covered work so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you may +not convey it at all. For example, if you agree to terms that obligate you +to collect a royalty for further conveying from those to whom you convey +the Program, the only way you could satisfy both those terms and this +License would be to refrain entirely from conveying the Program. + + 13. Remote Network Interaction; Use with the GNU General Public License. 
+ + Notwithstanding any other provision of this License, if you modify the +Program, your modified version must prominently offer all users +interacting with it remotely through a computer network (if your version +supports such interaction) an opportunity to receive the Corresponding +Source of your version by providing access to the Corresponding Source +from a network server at no charge, through some standard or customary +means of facilitating copying of software. This Corresponding Source +shall include the Corresponding Source for any work covered by version 3 +of the GNU General Public License that is incorporated pursuant to the +following paragraph. + + Notwithstanding any other provision of this License, you have +permission to link or combine any covered work with a work licensed +under version 3 of the GNU General Public License into a single +combined work, and to convey the resulting work. The terms of this +License will continue to apply to the part which is the covered work, +but the work with which it is combined will remain governed by version +3 of the GNU General Public License. + + 14. Revised Versions of this License. + + The Free Software Foundation may publish revised and/or new versions of +the GNU Affero General Public License from time to time. Such new versions +will be similar in spirit to the present version, but may differ in detail to +address new problems or concerns. + + Each version is given a distinguishing version number. If the +Program specifies that a certain numbered version of the GNU Affero General +Public License "or any later version" applies to it, you have the +option of following the terms and conditions either of that numbered +version or of any later version published by the Free Software +Foundation. If the Program does not specify a version number of the +GNU Affero General Public License, you may choose any version ever published +by the Free Software Foundation. + + If the Program specifies that a proxy can decide which future +versions of the GNU Affero General Public License can be used, that proxy's +public statement of acceptance of a version permanently authorizes you +to choose that version for the Program. + + Later license versions may give you additional or different +permissions. However, no additional obligations are imposed on any +author or copyright holder as a result of your choosing to follow a +later version. + + 15. Disclaimer of Warranty. + + THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY +APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT +HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY +OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, +THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM +IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF +ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. Limitation of Liability. 
+
+  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+  17. Interpretation of Sections 15 and 16.
+
+  If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+                     END OF TERMS AND CONDITIONS
+
+            How to Apply These Terms to Your New Programs
+
+  If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+  To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the program's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This program is free software: you can redistribute it and/or modify
+    it under the terms of the GNU Affero General Public License as published
+    by the Free Software Foundation, either version 3 of the License, or
+    (at your option) any later version.
+
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+    GNU Affero General Public License for more details.
+
+    You should have received a copy of the GNU Affero General Public License
+    along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+  If your software can interact with users remotely through a computer
+network, you should also make sure that it provides a way for users to
+get its source. For example, if your program is a web application, its
+interface could display a "Source" link that leads users to an archive
+of the code. There are many ways you could offer source, and different
+solutions will be better for different programs; see section 13 for the
+specific requirements.
+
+  You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU AGPL, see
+<https://www.gnu.org/licenses/>.
diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/METADATA b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/METADATA
new file mode 100644
index 00000000..bd4c2782
--- /dev/null
+++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/METADATA
@@ -0,0 +1,220 @@
+Metadata-Version: 2.1
+Name: SimConnect
+Version: 0.4.26
+Summary: Adds a pythonic wrapper for SimConnect SDK.
+Home-page: https://github.com/odwdinc/Python-SimConnect
+Author: Anthony Pray
+Author-email: anthony.pray@gmail.com
+Maintainer: Anthony Pray
+Maintainer-email: anthony.pray@gmail.com
+License: AGPL 3.0
+Keywords: ctypes,FlightSim,SimConnect,Flight,Simulator
+Classifier: Development Status :: 2 - Pre-Alpha
+Classifier: Environment :: Console
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Requires-Python: >=3.6
+Description-Content-Type: text/markdown
+License-File: LICENSE
+
+[![PyPI version](https://badge.fury.io/py/SimConnect.svg)](https://badge.fury.io/py/SimConnect)
+# Python-SimConnect
+
+Python interface for Microsoft Flight Simulator 2020 (MSFS2020) using SimConnect.
+
+This library allows Python scripts to read and set variables within MSFS2020 and trigger events within the simulation.
+
+It also includes, as an example, "Cockpit Companion", a small Flask HTTP server that runs locally. It provides a web UI with a moving map and simulation variables, and it serves simulation data in JSON format in response to REST API requests.
+
+Full documentation for this example can be found at [https://msfs2020.cc](https://msfs2020.cc), and it is included in a standalone repo on GitHub as [MSFS2020-cockpit-companion](https://github.com/hankhank10/MSFS2020-cockpit-companion).
+
+
+## MobiFlight SimConnect events:
+
+Yes, this supports the new [SimConnect commands](https://forums.flightsimulator.com/t/full-g1000-control-now-with-mobiflight/348509) that DocMoebiuz of [MobiFlight](https://www.mobiflight.com/en/index.html) developed.
+A full list of [commands and install instructions](https://pastebin.com/fMdB7at2) is available.
+
+At this time, MobiFlight SimConnect commands are not included in the AircraftEvents class, so AircraftEvents.find() and AircraftEvents.get() will not work with them. You will need to pass the event ID to a new Event instance, as the example below shows.
+
+
+```py
+from SimConnect import *
+# Create SimConnect link
+sm = SimConnect()
+# Create an Event for the MobiFlight AS1000_MFD_SOFTKEYS_3 command.
+Sk3 = Event(b'MobiFlight.AS1000_MFD_SOFTKEYS_3', sm)
+# Call the Event.
+Sk3()
+sm.exit()
+quit()
+```
+
+## Python interface example
+
+```py
+from SimConnect import *
+
+# Create SimConnect link
+sm = SimConnect()
+# Note: the default _time is 2000, i.e. cached data is refreshed every 2 seconds.
+aq = AircraftRequests(sm, _time=2000)
+# Use _time=ms, where ms is the time in milliseconds to cache the data.
+# Setting ms to 0 disables data caching and always pulls new data from the sim.
+# There is still a timeout of 4 tries with a 10 ms delay between checks:
+# if no data is received within 40 ms, the value will be set to None.
+# Each request can be fine-tuned by setting the time param.
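+
+# Illustrative only (not part of the upstream example): with _time=0,
+# every get() bypasses the cache and always pulls fresh data from the sim.
+aq_uncached = AircraftRequests(sm, _time=0)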
+
+# To find a request and set the timeout of its cached data to 200 ms:
+altitude = aq.find("PLANE_ALTITUDE")
+altitude.time = 200
+
+# Get the aircraft's current altitude
+altitude = aq.get("PLANE_ALTITUDE")
+altitude = altitude + 1000
+
+# Set the aircraft's current altitude
+aq.set("PLANE_ALTITUDE", altitude)
+
+ae = AircraftEvents(sm)
+# Trigger a simple event
+event_to_trigger = ae.find("AP_MASTER")  # Toggles autopilot on or off
+event_to_trigger()
+
+# Trigger an event while passing a variable
+target_altitude = 15000
+event_to_trigger = ae.find("AP_ALT_VAR_SET_ENGLISH")  # Sets AP autopilot hold level
+event_to_trigger(target_altitude)
+sm.exit()
+quit()
+```
+
+## HTTP interface example
+
+Run `glass_server.py` using Python 3.
+
+#### `http://localhost:5000`
+Method: GET
+
+Variables: None
+
+Output: Web interface with moving map and aircraft information
+
+#### `http://localhost:5000/dataset/<dataset_name>`
+Method: GET
+
+Arguments to pass:
+
+|Argument|Location|Description|
+|---|---|---|
+|dataset_name|in path|can be navigation, airspeed, compass, vertical_speed, fuel, flaps, throttle, gear, trim, autopilot, cabin|
+
+Description: Returns a set of variables from the simulator in JSON format
+
+
+#### `http://localhost:5000/datapoint/<datapoint_name>/get`
+Method: GET
+
+Arguments to pass:
+
+|Argument|Location|Description|
+|---|---|---|
+|datapoint_name|in path|any variable name from MS SimConnect documentation|
+
+Description: Returns an individual variable from the simulator in JSON format
+
+
+#### `http://localhost:5000/datapoint/<datapoint_name>/set`
+Method: POST
+
+Arguments to pass:
+
+|Argument|Location|Description|
+|---|---|---|
+|datapoint_name|in path|any variable name from MS SimConnect documentation|
+|index (optional)|form or json|the relevant index if required (e.g. engine number) - if not passed, defaults to None|
+|value_to_use (optional)|form or json|value to set the variable to - if not passed, defaults to 0|
+
+Description: Sets a datapoint in the simulator
+
+#### `http://localhost:5000/event/<event_name>/trigger`
+Method: POST
+
+Arguments to pass:
+
+|Argument|Location|Description|
+|---|---|---|
+|event_name|in path|any event name from MS SimConnect documentation|
+|value_to_use (optional)|form or json|value to pass to the event|
+
+Description: Triggers an event in the simulator
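+
+As a quick check, the endpoints above can be exercised from Python as well. The following is a minimal sketch, assuming `glass_server.py` is running locally on port 5000; it uses the third-party `requests` package, which is not bundled with this library:
+
+```py
+import requests
+
+BASE = "http://localhost:5000"
+
+# Read a whole dataset (here: navigation) as JSON
+print(requests.get(f"{BASE}/dataset/navigation").json())
+
+# Read a single simulation variable
+print(requests.get(f"{BASE}/datapoint/PLANE_ALTITUDE/get").json())
+
+# Set a simulation variable (value_to_use defaults to 0 if omitted)
+requests.post(f"{BASE}/datapoint/PLANE_ALTITUDE/set", json={"value_to_use": 12000})
+
+# Trigger an event; a value could be passed via the value_to_use field
+requests.post(f"{BASE}/event/AP_MASTER/trigger")
+```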
+
+## Running SimConnect on a separate system
+
+#### Note: At this time SimConnect can only run on Windows hosts.
+
+Create a file called SimConnect.cfg in the same folder as your script.
+#### Sample SimConnect.cfg:
+```ini
+; Example SimConnect client configurations
+[SimConnect]
+Protocol=IPv4
+Address=<IPv4 address of the host running the simulator>
+Port=500
+```
+To enable the host running the sim to share over the network,
+
+add `<Address>0.0.0.0</Address>`
+
+under the `<Port>500</Port>` in SimConnect.xml.
+
+SimConnect.xml can be located at
+#### `%AppData%\Microsoft Flight Simulator\SimConnect.xml`
+
+#### Sample SimConnect.xml:
+```xml
+<?xml version="1.0" encoding="Windows-1252"?>
+
+<SimBase.Document Type="SimConnect" version="1,0">
+  <Descr>SimConnect Server Configuration</Descr>
+  <Filename>SimConnect.xml</Filename>
+  <SimConnect.Comm>
+    <Descr>Static IP4 port</Descr>
+    <Protocol>IPv4</Protocol>
+    <Scope>local</Scope>
+    <Port>500</Port>
+    <Address>0.0.0.0</Address>
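+    <!-- Binding 0.0.0.0 makes the simulator accept connections on all network interfaces -->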
+    <MaxClients>64</MaxClients>
+    <MaxRecvSize>41088</MaxRecvSize>
+  </SimConnect.Comm>
+...
+```
+
+## Notes:
+
+64-bit Python is required. You may see this error when running 32-bit Python:
+
+```OSError: [WinError 193] %1 is not a valid Win32 application```
+
+Per mracko on COM_RADIO_SET:
+
+    MSFS uses the European COM frequency spacing of 8.33kHz for all default aircraft.
+    This means that in practice, you increment the frequency by 0.005 MHz and
+    skip x.x20, x.x45, x.x70, and x.x95 MHz frequencies.
+    Have a look here http://g3asr.co.uk/calculators/833kHz.htm
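+
+That skip rule can be made concrete with a short sketch (illustrative only; `valid_com_channels` is not part of this library):
+
+```py
+def valid_com_channels(base_mhz, span_mhz=0.1):
+    """List 8.33 kHz channels: step 0.005 MHz, skipping x.x20, x.x45, x.x70 and x.x95."""
+    skipped = {20, 45, 70, 95}  # disallowed final two digits of the kHz value
+    channels = []
+    khz = round(base_mhz * 1000)
+    while khz < round((base_mhz + span_mhz) * 1000):
+        if khz % 100 not in skipped:
+            channels.append(khz / 1000)
+        khz += 5  # 0.005 MHz step
+    return channels
+
+print(valid_com_channels(118.0))
+# [118.0, 118.005, 118.01, 118.015, 118.025, 118.03, ...]
+```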
+
+## Events and Variables
+
+Below are links to the Microsoft documentation:
+
+[Function](https://docs.microsoft.com/en-us/previous-versions/microsoft-esp/cc526983(v=msdn.10))
+
+[Event IDs](https://docs.microsoft.com/en-us/previous-versions/microsoft-esp/cc526980(v=msdn.10))
+
+[Simulation Variables](https://docs.flightsimulator.com/html/Programming_Tools/SimVars/Simulation_Variables.htm)
diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/RECORD b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/RECORD
new file mode 100644
index 00000000..a570e84e
--- /dev/null
+++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/RECORD
@@ -0,0 +1,24 @@
+SimConnect-0.4.26.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+SimConnect-0.4.26.dist-info/LICENSE,sha256=hIahDEOTzuHCU5J2nd07LWwkLW7Hko4UFO__ffsvB-8,34523
+SimConnect-0.4.26.dist-info/METADATA,sha256=4xaiGo98iUSJapRYiuSYf_GbMmFEiHLJcrtRVv_ZqjY,7366
+SimConnect-0.4.26.dist-info/RECORD,,
+SimConnect-0.4.26.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+SimConnect-0.4.26.dist-info/WHEEL,sha256=z9j0xAa_JmUKMpmz72K0ZGALSM_n-wQVmGbleXx2VHg,110
+SimConnect-0.4.26.dist-info/top_level.txt,sha256=-l3sVHmM-c-3kET7CmFRzkEIF3FrMlqfpaZrL6q-7Fk,11
+SimConnect/Attributes.py,sha256=cW12e7Pz1d2U5gRFq07kaA5i7R3zPPMwWze8xUlVvIc,30412
+SimConnect/Constants.py,sha256=Orymz2RiBdoxI6_cOtGpvNRob0g48KvGdfO4BrrYO1k,2439
+SimConnect/Enum.py,sha256=gSl5ygnkmpE_CDy_fE-nQRpxlBxU-JjkN9NNC5k30bU,23099
+SimConnect/EventList.py,sha256=7bHLQWhFAzBLT20YUCgE2VGy-oNjOeNbNaTOldaA3oI,90310
+SimConnect/FacilitiesList.py,sha256=WG0PTwpT4MJlzUG6puhIppIeCsZXw8laJYUgTlG4aJQ,3597
+SimConnect/RequestList.py,sha256=2babscxMzIqM71NmFxWkSWuHBSXO2r_tLw6zxZAS1c8,107785
+SimConnect/SimConnect.dll,sha256=Z3_ThJ1U4qaDHcGqNI2XTq0iVexgn79YMO8yvwgEuLw,58368
+SimConnect/SimConnect.py,sha256=aTMpln3ynnNBY0w_FO79XU22aJ7zX9y6KZ4pIMe53bc,13354
+SimConnect/__init__.py,sha256=O3zs8qIMzt--vRldfa3DvdmZtoYZN9_EtR6MFe0aznU,496
+SimConnect/__pycache__/Attributes.cpython-311.pyc,,
+SimConnect/__pycache__/Constants.cpython-311.pyc,,
+SimConnect/__pycache__/Enum.cpython-311.pyc,,
+SimConnect/__pycache__/EventList.cpython-311.pyc,,
+SimConnect/__pycache__/FacilitiesList.cpython-311.pyc,,
+SimConnect/__pycache__/RequestList.cpython-311.pyc,,
+SimConnect/__pycache__/SimConnect.cpython-311.pyc,,
+SimConnect/__pycache__/__init__.cpython-311.pyc,,
diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/REQUESTED b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/REQUESTED
new file mode 100644
index 00000000..e69de29b
diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/WHEEL b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/WHEEL
new file mode 100644
index 00000000..0b18a281
--- /dev/null
+++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/WHEEL
@@ -0,0 +1,6 @@
+Wheel-Version: 1.0 +Generator: bdist_wheel (0.37.1) +Root-Is-Purelib: true +Tag: py2-none-any +Tag: py3-none-any + diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/top_level.txt b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/top_level.txt new file mode 100644 index 00000000..db7f4420 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect-0.4.26.dist-info/top_level.txt @@ -0,0 +1 @@ +SimConnect diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/Attributes.py b/templates/skills/msfs2020_control/dependencies/SimConnect/Attributes.py new file mode 100644 index 00000000..4e084aff --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/Attributes.py @@ -0,0 +1,984 @@ +from .Enum import * +from .Constants import * +from ctypes import * +from ctypes.wintypes import * + + +class SimConnectDll(object): + + def __init__(self, library_path): + self.EventID = SIMCONNECT_CLIENT_EVENT_ID + self.DATA_DEFINITION_ID = SIMCONNECT_DATA_DEFINITION_ID + self.DATA_REQUEST_ID = SIMCONNECT_DATA_REQUEST_ID + self.GROUP_ID = SIMCONNECT_NOTIFICATION_GROUP_ID + self.INPUT_GROUP_ID = SIMCONNECT_INPUT_GROUP_ID + self.CLIENT_DATA_ID = SIMCONNECT_CLIENT_DATA_ID + self.CLIENT_DATA_DEFINITION_ID = SIMCONNECT_CLIENT_DATA_DEFINITION_ID + + self.SimConnect = windll.LoadLibrary(library_path) + # SIMCONNECTAPI SimConnect_Open( + # HANDLE * phSimConnect, + # LPCSTR szName, + # HWND hWnd, + # DWORD UserEventWin32, + # HANDLE hEventHandle, + # DWORD ConfigIndex) + + self.Open = self.SimConnect.SimConnect_Open + self.Open.restype = HRESULT + self.Open.argtypes = [POINTER(HANDLE), LPCSTR, HWND, DWORD, HANDLE, DWORD] + + # SIMCONNECTAPI SimConnect_Close( + # HANDLE hSimConnect); + + self.Close = self.SimConnect.SimConnect_Close + self.Close.restype = HRESULT + self.Close.argtypes = [HANDLE] + + # SIMCONNECTAPI SimConnect_AddToDataDefinition( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_DEFINITION_ID DefineID, + # const char * DatumName, + # const char * UnitsName, + # SIMCONNECT_DATATYPE DatumType = SIMCONNECT_DATATYPE_FLOAT64, + # float fEpsilon = 0, + # DWORD DatumID = SIMCONNECT_UNUSED); + + self.AddToDataDefinition = self.SimConnect.SimConnect_AddToDataDefinition + self.AddToDataDefinition.restype = HRESULT + self.AddToDataDefinition.argtypes = [ + HANDLE, + self.DATA_DEFINITION_ID, + c_char_p, + c_char_p, + SIMCONNECT_DATATYPE, + c_float, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_SubscribeToSystemEvent( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID EventID, + # const char * SystemEventName); + + self.SubscribeToSystemEvent = ( + self.SimConnect.SimConnect_SubscribeToSystemEvent + ) + self.SubscribeToSystemEvent.restype = HRESULT + self.SubscribeToSystemEvent.argtypes = [HANDLE, self.EventID, c_char_p] + + # SIMCONNECTAPI SimConnect_CallDispatch( + # HANDLE hSimConnect, + # DispatchProc pfcnDispatch, + # void * pContext); + + self.DispatchProc = WINFUNCTYPE(None, POINTER(SIMCONNECT_RECV), DWORD, c_void_p) + + self.CallDispatch = self.SimConnect.SimConnect_CallDispatch + self.CallDispatch.restype = HRESULT + self.CallDispatch.argtypes = [HANDLE, self.DispatchProc, c_void_p] + + # SIMCONNECTAPI SimConnect_RequestDataOnSimObjectType( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID, + # SIMCONNECT_DATA_DEFINITION_ID DefineID, + # DWORD dwRadiusMeters, + # SIMCONNECT_SIMOBJECT_TYPE type); + + self.RequestDataOnSimObjectType = ( + 
self.SimConnect.SimConnect_RequestDataOnSimObjectType + ) + self.RequestDataOnSimObjectType.restype = HRESULT + self.RequestDataOnSimObjectType.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + self.DATA_DEFINITION_ID, + DWORD, + SIMCONNECT_SIMOBJECT_TYPE, + ] + + # SIMCONNECTAPI SimConnect_TransmitClientEvent( + # HANDLE hSimConnect, + # SIMCONNECT_OBJECT_ID ObjectID, + # SIMCONNECT_CLIENT_EVENT_ID EventID, + # DWORD dwData, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID, + # SIMCONNECT_EVENT_FLAG Flags); + + self.TransmitClientEvent = self.SimConnect.SimConnect_TransmitClientEvent + self.TransmitClientEvent.restype = HRESULT + self.TransmitClientEvent.argtypes = [ + HANDLE, + SIMCONNECT_OBJECT_ID, + self.EventID, + DWORD, + DWORD, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_MapClientEventToSimEvent( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID EventID, + # const char * EventName = ""); + + self.MapClientEventToSimEvent = ( + self.SimConnect.SimConnect_MapClientEventToSimEvent + ) + self.MapClientEventToSimEvent.restype = HRESULT + self.MapClientEventToSimEvent.argtypes = [HANDLE, self.EventID, c_char_p] + + # SIMCONNECTAPI SimConnect_AddClientEventToNotificationGroup( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID, + # SIMCONNECT_CLIENT_EVENT_ID EventID, + # BOOL bMaskable = FALSE); + + self.AddClientEventToNotificationGroup = ( + self.SimConnect.SimConnect_AddClientEventToNotificationGroup + ) + self.AddClientEventToNotificationGroup.restype = HRESULT + self.AddClientEventToNotificationGroup.argtypes = [ + HANDLE, + self.GROUP_ID, + self.EventID, + c_bool, + ] + + # SIMCONNECTAPI SimConnect_SetSystemEventState( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID EventID + # SIMCONNECT_STATE dwState); + self.SetSystemEventState = self.SimConnect.SimConnect_SetSystemEventState + self.SetSystemEventState.restype = HRESULT + self.SetSystemEventState.argtypes = [HANDLE, self.EventID, SIMCONNECT_STATE] + + # SIMCONNECTAPI SimConnect_AddClientEventToNotificationGroup( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID + # SIMCONNECT_CLIENT_EVENT_ID EventID + # BOOL bMaskable = FALSE); + self.AddClientEventToNotificationGroup = ( + self.SimConnect.SimConnect_AddClientEventToNotificationGroup + ) + self.AddClientEventToNotificationGroup.restype = HRESULT + self.AddClientEventToNotificationGroup.argtypes = [ + HANDLE, + self.GROUP_ID, + self.EventID, + c_bool, + ] + + # SIMCONNECTAPI SimConnect_RemoveClientEvent( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID + # SIMCONNECT_CLIENT_EVENT_ID EventID); + self.RemoveClientEvent = self.SimConnect.SimConnect_RemoveClientEvent + self.RemoveClientEvent.restype = HRESULT + self.RemoveClientEvent.argtypes = [HANDLE, self.GROUP_ID, self.EventID] + + # SIMCONNECTAPI SimConnect_SetNotificationGroupPriority( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID + # DWORD uPriority); + self.SetNotificationGroupPriority = ( + self.SimConnect.SimConnect_SetNotificationGroupPriority + ) + self.SetNotificationGroupPriority.restype = HRESULT + self.SetNotificationGroupPriority.argtypes = [HANDLE, self.GROUP_ID, DWORD] + + # SIMCONNECTAPI SimConnect_ClearNotificationGroup( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID); + self.ClearNotificationGroup = ( + self.SimConnect.SimConnect_ClearNotificationGroup + ) + self.ClearNotificationGroup.restype = HRESULT + self.ClearNotificationGroup.argtypes = [HANDLE, self.GROUP_ID] + + # SIMCONNECTAPI 
SimConnect_RequestNotificationGroup( + # HANDLE hSimConnect, + # SIMCONNECT_NOTIFICATION_GROUP_ID GroupID + # DWORD dwReserved = 0 + # DWORD Flags = 0); + self.RequestNotificationGroup = ( + self.SimConnect.SimConnect_RequestNotificationGroup + ) + self.RequestNotificationGroup.restype = HRESULT + self.RequestNotificationGroup.argtypes = [HANDLE, self.GROUP_ID, DWORD, DWORD] + + # SIMCONNECTAPI SimConnect_ClearDataDefinition( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_DEFINITION_ID DefineID); + self.ClearDataDefinition = self.SimConnect.SimConnect_ClearDataDefinition + self.ClearDataDefinition.restype = HRESULT + self.ClearDataDefinition.argtypes = [HANDLE, self.DATA_DEFINITION_ID] + + # SIMCONNECTAPI SimConnect_RequestDataOnSimObject( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # SIMCONNECT_DATA_DEFINITION_ID DefineID + # SIMCONNECT_OBJECT_ID ObjectID + # SIMCONNECT_PERIOD Period + # SIMCONNECT_DATA_REQUEST_FLAG Flags = 0 + # DWORD origin = 0 + # DWORD interval = 0 + # DWORD limit = 0); + self.RequestDataOnSimObject = ( + self.SimConnect.SimConnect_RequestDataOnSimObject + ) + self.RequestDataOnSimObject.restype = HRESULT + self.RequestDataOnSimObject.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + self.DATA_DEFINITION_ID, + SIMCONNECT_OBJECT_ID, + SIMCONNECT_PERIOD, + SIMCONNECT_DATA_REQUEST_FLAG, + DWORD, + DWORD, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_SetDataOnSimObject( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_DEFINITION_ID DefineID + # SIMCONNECT_OBJECT_ID ObjectID + # SIMCONNECT_DATA_SET_FLAG Flags + # DWORD ArrayCount + # DWORD cbUnitSize + # void * pDataSet); + self.SetDataOnSimObject = self.SimConnect.SimConnect_SetDataOnSimObject + self.SetDataOnSimObject.restype = HRESULT + self.SetDataOnSimObject.argtypes = [ + HANDLE, + self.DATA_DEFINITION_ID, + SIMCONNECT_OBJECT_ID, + SIMCONNECT_DATA_SET_FLAG, + DWORD, + DWORD, + c_void_p, + ] + + # SIMCONNECTAPI SimConnect_MapInputEventToClientEvent( + # HANDLE hSimConnect, + # SIMCONNECT_INPUT_GROUP_ID GroupID + # const char * szInputDefinition + # SIMCONNECT_CLIENT_EVENT_ID DownEventID + # DWORD DownValue = 0 + # SIMCONNECT_CLIENT_EVENT_ID UpEventID = (SIMCONNECT_CLIENT_EVENT_ID)SIMCONNECT_UNUSED + # DWORD UpValue = 0 + # BOOL bMaskable = FALSE); + self.MapInputEventToClientEvent = ( + self.SimConnect.SimConnect_MapInputEventToClientEvent + ) + self.MapInputEventToClientEvent.restype = HRESULT + self.MapInputEventToClientEvent.argtypes = [ + HANDLE, + self.INPUT_GROUP_ID, + c_char_p, + self.EventID, + DWORD, + self.EventID, + DWORD, + c_bool, + ] + + # SIMCONNECTAPI SimConnect_SetInputGroupPriority( + # HANDLE hSimConnect, + # SIMCONNECT_INPUT_GROUP_ID GroupID + # DWORD uPriority); + self.SetInputGroupPriority = self.SimConnect.SimConnect_SetInputGroupPriority + self.SetInputGroupPriority.restype = HRESULT + self.SetInputGroupPriority.argtypes = [HANDLE, self.INPUT_GROUP_ID, DWORD] + + # SIMCONNECTAPI SimConnect_RemoveInputEvent( + # HANDLE hSimConnect, + # SIMCONNECT_INPUT_GROUP_ID GroupID + # const char * szInputDefinition); + self.RemoveInputEvent = self.SimConnect.SimConnect_RemoveInputEvent + self.RemoveInputEvent.restype = HRESULT + self.RemoveInputEvent.argtypes = [HANDLE, self.INPUT_GROUP_ID, c_char_p] + + # SIMCONNECTAPI SimConnect_ClearInputGroup( + # HANDLE hSimConnect, + # SIMCONNECT_INPUT_GROUP_ID GroupID); + self.ClearInputGroup = self.SimConnect.SimConnect_ClearInputGroup + self.ClearInputGroup.restype = HRESULT + self.ClearInputGroup.argtypes = [HANDLE, self.INPUT_GROUP_ID] + + # 
SIMCONNECTAPI SimConnect_SetInputGroupState( + # HANDLE hSimConnect, + # SIMCONNECT_INPUT_GROUP_ID GroupID + # DWORD dwState); + self.SetInputGroupState = self.SimConnect.SimConnect_SetInputGroupState + self.SetInputGroupState.restype = HRESULT + self.SetInputGroupState.argtypes = [HANDLE, self.INPUT_GROUP_ID, DWORD] + + # SIMCONNECTAPI SimConnect_RequestReservedKey( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID EventID + # const char * szKeyChoice1 = "" + # const char * szKeyChoice2 = "" + # const char * szKeyChoice3 = ""); + self.RequestReservedKey = self.SimConnect.SimConnect_RequestReservedKey + self.RequestReservedKey.restype = HRESULT + self.RequestReservedKey.argtypes = [ + HANDLE, + self.EventID, + c_char_p, + c_char_p, + c_char_p, + ] + + # SIMCONNECTAPI SimConnect_UnsubscribeFromSystemEvent( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID EventID); + self.UnsubscribeFromSystemEvent = ( + self.SimConnect.SimConnect_UnsubscribeFromSystemEvent + ) + self.UnsubscribeFromSystemEvent.restype = HRESULT + self.UnsubscribeFromSystemEvent.argtypes = [HANDLE, self.EventID] + + # SIMCONNECTAPI SimConnect_WeatherRequestInterpolatedObservation( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # float lat + # float lon + # float alt); + self.WeatherRequestInterpolatedObservation = ( + self.SimConnect.SimConnect_WeatherRequestInterpolatedObservation + ) + self.WeatherRequestInterpolatedObservation.restype = HRESULT + self.WeatherRequestInterpolatedObservation.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_float, + c_float, + c_float, + ] + + # SIMCONNECTAPI SimConnect_WeatherRequestObservationAtStation( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # const char * szICAO); + self.WeatherRequestObservationAtStation = ( + self.SimConnect.SimConnect_WeatherRequestObservationAtStation + ) + self.WeatherRequestObservationAtStation.restype = HRESULT + self.WeatherRequestObservationAtStation.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_char_p, + ] + + # SIMCONNECTAPI SimConnect_WeatherRequestObservationAtNearestStation( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # float lat + # float lon); + self.WeatherRequestObservationAtNearestStation = ( + self.SimConnect.SimConnect_WeatherRequestObservationAtNearestStation + ) + self.WeatherRequestObservationAtNearestStation.restype = HRESULT + self.WeatherRequestObservationAtNearestStation.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_float, + c_float, + ] + + # SIMCONNECTAPI SimConnect_WeatherCreateStation( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # const char * szICAO + # const char * szName + # float lat + # float lon + # float alt); + self.WeatherCreateStation = self.SimConnect.SimConnect_WeatherCreateStation + self.WeatherCreateStation.restype = HRESULT + self.WeatherCreateStation.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_char_p, + c_char_p, + c_float, + c_float, + c_float, + ] + + # SIMCONNECTAPI SimConnect_WeatherRemoveStation( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # const char * szICAO); + self.WeatherRemoveStation = self.SimConnect.SimConnect_WeatherRemoveStation + self.WeatherRemoveStation.restype = HRESULT + self.WeatherRemoveStation.argtypes = [HANDLE, self.DATA_REQUEST_ID, c_char_p] + + # SIMCONNECTAPI SimConnect_WeatherSetObservation( + # HANDLE hSimConnect, + # DWORD Seconds + # const char * szMETAR); + self.WeatherSetObservation = self.SimConnect.SimConnect_WeatherSetObservation + 
self.WeatherSetObservation.restype = HRESULT + self.WeatherSetObservation.argtypes = [HANDLE, DWORD, c_char_p] + + # SIMCONNECTAPI SimConnect_WeatherSetModeServer( + # HANDLE hSimConnect, + # DWORD dwPort + # DWORD dwSeconds); + self.WeatherSetModeServer = self.SimConnect.SimConnect_WeatherSetModeServer + self.WeatherSetModeServer.restype = HRESULT + self.WeatherSetModeServer.argtypes = [HANDLE, DWORD, DWORD] + + # SIMCONNECTAPI SimConnect_WeatherSetModeTheme( + # HANDLE hSimConnect, + # const char * szThemeName); + self.WeatherSetModeTheme = self.SimConnect.SimConnect_WeatherSetModeTheme + self.WeatherSetModeTheme.restype = HRESULT + self.WeatherSetModeTheme.argtypes = [HANDLE, c_char_p] + + # SIMCONNECTAPI SimConnect_WeatherSetModeGlobal( + # HANDLE hSimConnect); + self.WeatherSetModeGlobal = self.SimConnect.SimConnect_WeatherSetModeGlobal + self.WeatherSetModeGlobal.restype = HRESULT + self.WeatherSetModeGlobal.argtypes = [HANDLE] + + # SIMCONNECTAPI SimConnect_WeatherSetModeCustom( + # HANDLE hSimConnect); + self.WeatherSetModeCustom = self.SimConnect.SimConnect_WeatherSetModeCustom + self.WeatherSetModeCustom.restype = HRESULT + self.WeatherSetModeCustom.argtypes = [HANDLE] + + # SIMCONNECTAPI SimConnect_WeatherSetDynamicUpdateRate( + # HANDLE hSimConnect, + # DWORD dwRate); + self.WeatherSetDynamicUpdateRate = ( + self.SimConnect.SimConnect_WeatherSetDynamicUpdateRate + ) + self.WeatherSetDynamicUpdateRate.restype = HRESULT + self.WeatherSetDynamicUpdateRate.argtypes = [HANDLE, DWORD] + + # SIMCONNECTAPI SimConnect_WeatherRequestCloudState( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # float minLat + # float minLon + # float minAlt + # float maxLat + # float maxLon + # float maxAlt + # DWORD dwFlags = 0); + self.WeatherRequestCloudState = ( + self.SimConnect.SimConnect_WeatherRequestCloudState + ) + self.WeatherRequestCloudState.restype = HRESULT + self.WeatherRequestCloudState.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_WeatherCreateThermal( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # float lat + # float lon + # float alt + # float radius + # float height + # float coreRate = 3.0f + # float coreTurbulence = 0.05f + # float sinkRate = 3.0f + # float sinkTurbulence = 0.2f + # float coreSize = 0.4f + # float coreTransitionSize = 0.1f + # float sinkLayerSize = 0.4f + # float sinkTransitionSize = 0.1f); + self.WeatherCreateThermal = self.SimConnect.SimConnect_WeatherCreateThermal + self.WeatherCreateThermal.restype = HRESULT + self.WeatherCreateThermal.argtypes = [ + HANDLE, + self.DATA_REQUEST_ID, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + c_float, + ] + + # SIMCONNECTAPI SimConnect_WeatherRemoveThermal( + # HANDLE hSimConnect, + # SIMCONNECT_OBJECT_ID ObjectID); + self.WeatherRemoveThermal = self.SimConnect.SimConnect_WeatherRemoveThermal + self.WeatherRemoveThermal.restype = HRESULT + self.WeatherRemoveThermal.argtypes = [HANDLE, SIMCONNECT_OBJECT_ID] + + # SIMCONNECTAPI SimConnect_AICreateParkedATCAircraft( + # HANDLE hSimConnect, + # const char * szContainerTitle + # const char * szTailNumber + # const char * szAirportID + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AICreateParkedATCAircraft = ( + self.SimConnect.SimConnect_AICreateParkedATCAircraft + ) + self.AICreateParkedATCAircraft.restype = HRESULT + 
self.AICreateParkedATCAircraft.argtypes = [ + HANDLE, + c_char_p, + c_char_p, + c_char_p, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AICreateEnrouteATCAircraft( + # HANDLE hSimConnect, + # const char * szContainerTitle + # const char * szTailNumber + # int iFlightNumber + # const char * szFlightPlanPath + # double dFlightPlanPosition + # BOOL bTouchAndGo + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AICreateEnrouteATCAircraft = ( + self.SimConnect.SimConnect_AICreateEnrouteATCAircraft + ) + self.AICreateEnrouteATCAircraft.restype = HRESULT + self.AICreateEnrouteATCAircraft.argtypes = [ + HANDLE, + c_char_p, + c_char_p, + c_int, + c_char_p, + c_double, + c_bool, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AICreateNonATCAircraft( + # HANDLE hSimConnect, + # const char * szContainerTitle + # const char * szTailNumber + # SIMCONNECT_DATA_INITPOSITION InitPos + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AICreateNonATCAircraft = ( + self.SimConnect.SimConnect_AICreateNonATCAircraft + ) + self.AICreateNonATCAircraft.restype = HRESULT + self.AICreateNonATCAircraft.argtypes = [ + HANDLE, + c_char_p, + c_char_p, + SIMCONNECT_DATA_INITPOSITION, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AICreateSimulatedObject( + # HANDLE hSimConnect, + # const char * szContainerTitle + # SIMCONNECT_DATA_INITPOSITION InitPos + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AICreateSimulatedObject = ( + self.SimConnect.SimConnect_AICreateSimulatedObject + ) + self.AICreateSimulatedObject.restype = HRESULT + self.AICreateSimulatedObject.argtypes = [ + HANDLE, + c_char_p, + SIMCONNECT_DATA_INITPOSITION, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AIReleaseControl( + # HANDLE hSimConnect, + # SIMCONNECT_OBJECT_ID ObjectID + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AIReleaseControl = self.SimConnect.SimConnect_AIReleaseControl + self.AIReleaseControl.restype = HRESULT + self.AIReleaseControl.argtypes = [ + HANDLE, + SIMCONNECT_OBJECT_ID, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AIRemoveObject( + # HANDLE hSimConnect, + # SIMCONNECT_OBJECT_ID ObjectID + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AIRemoveObject = self.SimConnect.SimConnect_AIRemoveObject + self.AIRemoveObject.restype = HRESULT + self.AIRemoveObject.argtypes = [ + HANDLE, + SIMCONNECT_OBJECT_ID, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_AISetAircraftFlightPlan( + # HANDLE hSimConnect, + # SIMCONNECT_OBJECT_ID ObjectID + # const char * szFlightPlanPath + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.AISetAircraftFlightPlan = ( + self.SimConnect.SimConnect_AISetAircraftFlightPlan + ) + self.AISetAircraftFlightPlan.restype = HRESULT + self.AISetAircraftFlightPlan.argtypes = [ + HANDLE, + SIMCONNECT_OBJECT_ID, + c_char_p, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_ExecuteMissionAction( + # HANDLE hSimConnect, + # const GUID guidInstanceId); + self.ExecuteMissionAction = self.SimConnect.SimConnect_ExecuteMissionAction + self.ExecuteMissionAction.restype = HRESULT + self.ExecuteMissionAction.argtypes = [] + + # SIMCONNECTAPI SimConnect_CompleteCustomMissionAction( + # HANDLE hSimConnect, + # const GUID guidInstanceId); + self.CompleteCustomMissionAction = ( + self.SimConnect.SimConnect_CompleteCustomMissionAction + ) + self.CompleteCustomMissionAction.restype = HRESULT + self.CompleteCustomMissionAction.argtypes = [] + + # SIMCONNECTAPI SimConnect_RetrieveString( + # SIMCONNECT_RECV * pData, + # DWORD cbData + # void * pStringV + 
# char ** pszString + # DWORD * pcbString); + self.RetrieveString = self.SimConnect.SimConnect_RetrieveString + self.RetrieveString.restype = HRESULT + self.RetrieveString.argtypes = [] + + # SIMCONNECTAPI SimConnect_GetLastSentPacketID( + # HANDLE hSimConnect, + # DWORD * pdwError); + self.GetLastSentPacketID = self.SimConnect.SimConnect_GetLastSentPacketID + self.GetLastSentPacketID.restype = HRESULT + self.GetLastSentPacketID.argtypes = [HANDLE, POINTER(DWORD)] + + # SIMCONNECTAPI SimConnect_GetNextDispatch( + # HANDLE hSimConnect, + # SIMCONNECT_RECV ** ppData + # DWORD * pcbData); + self.GetNextDispatch = self.SimConnect.SimConnect_GetNextDispatch + self.GetNextDispatch.restype = HRESULT + self.GetNextDispatch.argtypes = [] + + # SIMCONNECTAPI SimConnect_RequestResponseTimes( + # HANDLE hSimConnect, + # DWORD nCount + # float * fElapsedSeconds); + self.RequestResponseTimes = self.SimConnect.SimConnect_RequestResponseTimes + self.RequestResponseTimes.restype = HRESULT + self.RequestResponseTimes.argtypes = [ + HANDLE, + DWORD, + c_float + ] + + # SIMCONNECTAPI SimConnect_InsertString( + # char * pDest, + # DWORD cbDest + # void ** ppEnd + # DWORD * pcbStringV + # const char * pSource); + self.InsertString = self.SimConnect.SimConnect_InsertString + self.InsertString.restype = HRESULT + self.InsertString.argtypes = [] + + # SIMCONNECTAPI SimConnect_CameraSetRelative6DOF( + # HANDLE hSimConnect, + # float fDeltaX + # float fDeltaY + # float fDeltaZ + # float fPitchDeg + # float fBankDeg + # float fHeadingDeg); + self.CameraSetRelative6DOF = self.SimConnect.SimConnect_CameraSetRelative6DOF + self.CameraSetRelative6DOF.restype = HRESULT + self.CameraSetRelative6DOF.argtypes = [ + c_float, + c_float, + c_float, + c_float, + c_float, + c_float + ] + + # SIMCONNECTAPI SimConnect_MenuAddItem( + # HANDLE hSimConnect, + # const char * szMenuItem + # SIMCONNECT_CLIENT_EVENT_ID MenuEventID + # DWORD dwData); + self.MenuAddItem = self.SimConnect.SimConnect_MenuAddItem + self.MenuAddItem.restype = HRESULT + self.MenuAddItem.argtypes = [ + HANDLE, + SIMCONNECT_CLIENT_EVENT_ID, + DWORD + ] + + # SIMCONNECTAPI SimConnect_MenuDeleteItem( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID MenuEventID); + self.MenuDeleteItem = self.SimConnect.SimConnect_MenuDeleteItem + self.MenuDeleteItem.restype = HRESULT + self.MenuDeleteItem.argtypes = [ + HANDLE, + SIMCONNECT_CLIENT_EVENT_ID + ] + + # SIMCONNECTAPI SimConnect_MenuAddSubItem( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID MenuEventID + # const char * szMenuItem + # SIMCONNECT_CLIENT_EVENT_ID SubMenuEventID + # DWORD dwData); + self.MenuAddSubItem = self.SimConnect.SimConnect_MenuAddSubItem + self.MenuAddSubItem.restype = HRESULT + self.MenuAddSubItem.argtypes = [ + HANDLE, + SIMCONNECT_CLIENT_EVENT_ID, + c_char_p, + SIMCONNECT_CLIENT_EVENT_ID, + DWORD + ] + + # SIMCONNECTAPI SimConnect_MenuDeleteSubItem( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_EVENT_ID MenuEventID + # const SIMCONNECT_CLIENT_EVENT_ID SubMenuEventID); + self.MenuDeleteSubItem = self.SimConnect.SimConnect_MenuDeleteSubItem + self.MenuDeleteSubItem.restype = HRESULT + self.MenuDeleteSubItem.argtypes = [ + HANDLE, + SIMCONNECT_CLIENT_EVENT_ID, + SIMCONNECT_CLIENT_EVENT_ID + ] + + # SIMCONNECTAPI SimConnect_RequestSystemState( + # HANDLE hSimConnect, + # SIMCONNECT_DATA_REQUEST_ID RequestID + # const char * szState); + self.RequestSystemState = self.SimConnect.SimConnect_RequestSystemState + self.RequestSystemState.restype = HRESULT + self.RequestSystemState.argtypes = [ 
+ HANDLE, + SIMCONNECT_DATA_REQUEST_ID, + c_char_p + ] + + # SIMCONNECTAPI SimConnect_SetSystemState( + # HANDLE hSimConnect, + # const char * szState + # DWORD dwInteger + # float fFloat + # const char * szString); + self.SetSystemState = self.SimConnect.SimConnect_SetSystemState + self.SetSystemState.restype = HRESULT + self.SetSystemState.argtypes = [ + HANDLE, + c_char_p, + DWORD, + c_float, + c_char_p + ] + + # SIMCONNECTAPI SimConnect_MapClientDataNameToID( + # HANDLE hSimConnect, + # const char * szClientDataName + # SIMCONNECT_CLIENT_DATA_ID ClientDataID); + self.MapClientDataNameToID = self.SimConnect.SimConnect_MapClientDataNameToID + self.MapClientDataNameToID.restype = HRESULT + self.MapClientDataNameToID.argtypes = [ + HANDLE, + c_char_p, + SIMCONNECT_CLIENT_DATA_ID + ] + + # SIMCONNECTAPI SimConnect_CreateClientData( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_DATA_ID ClientDataID + # DWORD dwSize + # SIMCONNECT_CREATE_CLIENT_DATA_FLAG Flags); + self.CreateClientData = self.SimConnect.SimConnect_CreateClientData + self.CreateClientData.restype = HRESULT + self.CreateClientData.argtypes = [ + HANDLE, + self.CLIENT_DATA_ID, + DWORD, + SIMCONNECT_CREATE_CLIENT_DATA_FLAG, + ] + + # SIMCONNECTAPI SimConnect_AddToClientDataDefinition( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_DATA_DEFINITION_ID DefineID + # DWORD dwOffset + # DWORD dwSizeOrType + # float fEpsilon = 0 + # DWORD DatumID = SIMCONNECT_UNUSED); + self.AddToClientDataDefinition = self.SimConnect.SimConnect_AddToClientDataDefinition + self.AddToClientDataDefinition.restype = HRESULT + self.AddToClientDataDefinition.argtypes = [ + HANDLE, + self.CLIENT_DATA_DEFINITION_ID, + DWORD, + DWORD, + c_float, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_ClearClientDataDefinition( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_DATA_DEFINITION_ID DefineID); + self.ClearClientDataDefinition = self.SimConnect.SimConnect_ClearClientDataDefinition + self.ClearClientDataDefinition.restype = HRESULT + self.ClearClientDataDefinition.argtypes = [ + HANDLE, + self.CLIENT_DATA_DEFINITION_ID, + ] + + # SIMCONNECTAPI SimConnect_RequestClientData( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_DATA_ID ClientDataID + # SIMCONNECT_DATA_REQUEST_ID RequestID + # SIMCONNECT_CLIENT_DATA_DEFINITION_ID DefineID + # SIMCONNECT_CLIENT_DATA_PERIOD Period = SIMCONNECT_CLIENT_DATA_PERIOD_ONCE + # SIMCONNECT_CLIENT_DATA_REQUEST_FLAG Flags = 0 + # DWORD origin = 0 + # DWORD interval = 0 + # DWORD limit = 0); + self.RequestClientData = self.SimConnect.SimConnect_RequestClientData + self.RequestClientData.restype = HRESULT + self.RequestClientData.argtypes = [ + HANDLE, + self.CLIENT_DATA_ID, + self.DATA_REQUEST_ID, + self.CLIENT_DATA_DEFINITION_ID, + SIMCONNECT_CLIENT_DATA_PERIOD, + SIMCONNECT_CLIENT_DATA_REQUEST_FLAG, + DWORD, + DWORD, + DWORD, + ] + + # SIMCONNECTAPI SimConnect_SetClientData( + # HANDLE hSimConnect, + # SIMCONNECT_CLIENT_DATA_ID ClientDataID + # SIMCONNECT_CLIENT_DATA_DEFINITION_ID DefineID + # SIMCONNECT_CLIENT_DATA_SET_FLAG Flags + # DWORD dwReserved + # DWORD cbUnitSize + # void * pDataSet); + self.SetClientData = self.SimConnect.SimConnect_SetClientData + self.SetClientData.restype = HRESULT + self.SetClientData.argtypes = [ + HANDLE, + self.CLIENT_DATA_ID, + self.CLIENT_DATA_DEFINITION_ID, + SIMCONNECT_CLIENT_DATA_SET_FLAG, + DWORD, + DWORD, + c_void_p, + ] + + # SIMCONNECTAPI SimConnect_FlightLoad( + # HANDLE hSimConnect, + # const char * szFileName); + self.FlightLoad = self.SimConnect.SimConnect_FlightLoad + self.FlightLoad.restype 
= HRESULT + self.FlightLoad.argtypes = [HANDLE, c_char_p] + + # SIMCONNECTAPI SimConnect_FlightSave( + # HANDLE hSimConnect, + # const char * szFileName + # const char * szTitle + # const char * szDescription + # DWORD Flags); + self.FlightSave = self.SimConnect.SimConnect_FlightSave + self.FlightSave.restype = HRESULT + self.FlightSave.argtypes = [HANDLE, c_char_p, c_char_p, c_char_p, DWORD] + + # SIMCONNECTAPI SimConnect_FlightPlanLoad( + # HANDLE hSimConnect, + # const char * szFileName); + self.FlightPlanLoad = self.SimConnect.SimConnect_FlightPlanLoad + self.FlightPlanLoad.restype = HRESULT + self.FlightPlanLoad.argtypes = [HANDLE, c_char_p] + + # SIMCONNECTAPI SimConnect_Text( + # HANDLE hSimConnect, + # SIMCONNECT_TEXT_TYPE type + # float fTimeSeconds + # SIMCONNECT_CLIENT_EVENT_ID EventID + # DWORD cbUnitSize + # void * pDataSet); + self.Text = self.SimConnect.SimConnect_Text + self.Text.restype = HRESULT + self.Text.argtypes = [ + HANDLE, + SIMCONNECT_TEXT_TYPE, + c_float, + self.EventID, + DWORD, + c_void_p, + ] + + # SIMCONNECTAPI SimConnect_SubscribeToFacilities( + # HANDLE hSimConnect, + # SIMCONNECT_FACILITY_LIST_TYPE type + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.SubscribeToFacilities = self.SimConnect.SimConnect_SubscribeToFacilities + self.SubscribeToFacilities.restype = HRESULT + self.SubscribeToFacilities.argtypes = [ + HANDLE, + SIMCONNECT_FACILITY_LIST_TYPE, + self.DATA_REQUEST_ID, + ] + + # SIMCONNECTAPI SimConnect_UnsubscribeToFacilities( + # HANDLE hSimConnect, + # SIMCONNECT_FACILITY_LIST_TYPE type); + self.UnsubscribeToFacilities = ( + self.SimConnect.SimConnect_UnsubscribeToFacilities + ) + self.UnsubscribeToFacilities.restype = HRESULT + self.UnsubscribeToFacilities.argtypes = [ + HANDLE, + SIMCONNECT_FACILITY_LIST_TYPE, + ] + + # SIMCONNECTAPI SimConnect_RequestFacilitiesList( + # HANDLE hSimConnect, + # SIMCONNECT_FACILITY_LIST_TYPE type + # SIMCONNECT_DATA_REQUEST_ID RequestID); + self.RequestFacilitiesList = ( + self.SimConnect.SimConnect_RequestFacilitiesList + ) + self.RequestFacilitiesList.restype = HRESULT + self.RequestFacilitiesList.argtypes = [ + HANDLE, + SIMCONNECT_FACILITY_LIST_TYPE, + self.DATA_REQUEST_ID, + ] diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/Constants.py b/templates/skills/msfs2020_control/dependencies/SimConnect/Constants.py new file mode 100644 index 00000000..d41bb1b3 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/Constants.py @@ -0,0 +1,65 @@ +import logging +from ctypes.wintypes import * +from ctypes import * + +LOGGER = logging.getLogger(__name__) + + +# //---------------------------------------------------------------------------- +# // Constants +# //---------------------------------------------------------------------------- + +DWORD_MAX = DWORD(0xFFFFFFFF) +SIMCONNECT_UNUSED = DWORD_MAX +SIMCONNECT_OBJECT_ID_USER = DWORD(0) # proxy value for User vehicle ObjectID +SIMCONNECT_UNUSED = DWORD_MAX # special value to indicate unused event, ID + +SIMCONNECT_CAMERA_IGNORE_FIELD = c_float( + -1 +) # Used to tell the Camera API to NOT modify the value in this part of the argument. 
+ +SIMCONNECT_CLIENTDATA_MAX_SIZE = DWORD( + 8192 +) # maximum value for SimConnect_CreateClientData dwSize parameter + + +# Notification Group priority values +SIMCONNECT_GROUP_PRIORITY_HIGHEST = DWORD(1) # highest priority +SIMCONNECT_GROUP_PRIORITY_HIGHEST_MASKABLE = DWORD( + 10000000 +) # highest priority that allows events to be masked +SIMCONNECT_GROUP_PRIORITY_STANDARD = DWORD(1900000000) # standard priority +SIMCONNECT_GROUP_PRIORITY_DEFAULT = DWORD(2000000000) # default priority +SIMCONNECT_GROUP_PRIORITY_LOWEST = DWORD( + 4000000000 +) # priorities lower than this will be ignored + +# Weather observations Metar strings +MAX_METAR_LENGTH = DWORD(2000) + +# Maximum thermal size is 100 km. +MAX_THERMAL_SIZE = c_float(100000) +MAX_THERMAL_RATE = c_float(1000) + +# SIMCONNECT_DATA_INITPOSITION.Airspeed +INITPOSITION_AIRSPEED_CRUISE = DWORD(-1) # aircraft's cruise airspeed +INITPOSITION_AIRSPEED_KEEP = DWORD(-2) # keep current airspeed + +# AddToClientDataDefinition dwSizeOrType parameter type values +SIMCONNECT_CLIENTDATATYPE_INT8 = DWORD(-1) # 8-bit integer number +SIMCONNECT_CLIENTDATATYPE_INT16 = DWORD(-2) # 16-bit integer number +SIMCONNECT_CLIENTDATATYPE_INT32 = DWORD(-3) # 32-bit integer number +SIMCONNECT_CLIENTDATATYPE_INT64 = DWORD(-4) # 64-bit integer number +SIMCONNECT_CLIENTDATATYPE_FLOAT32 = DWORD(-5) # 32-bit floating-point number (float) +SIMCONNECT_CLIENTDATATYPE_FLOAT64 = DWORD(-6) # 64-bit floating-point number (double) + +# AddToClientDataDefinition dwOffset parameter special values +SIMCONNECT_CLIENTDATAOFFSET_AUTO = DWORD( + -1 +) # automatically compute offset of the ClientData variable + +# Open ConfigIndex parameter special value +SIMCONNECT_OPEN_CONFIGINDEX_LOCAL = DWORD( + -1 +) # ignore SimConnect.cfg settings, and force local connection +SIMCONNECT_OBJECT_ID = DWORD diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/Enum.py b/templates/skills/msfs2020_control/dependencies/SimConnect/Enum.py new file mode 100644 index 00000000..8b42cad9 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/Enum.py @@ -0,0 +1,701 @@ +from enum import IntEnum, IntFlag, Enum, auto +from ctypes.wintypes import * +from ctypes import * +from .Constants import * + +import logging + +LOGGER = logging.getLogger(__name__) + +# ---------------------------------------------------------------------------- +# Enum definitions +# ---------------------------------------------------------------------------- + + +# Define the types we need. +class CtypesEnum(IntEnum): + """A ctypes-compatible IntEnum superclass.""" + + @classmethod + def from_param(cls, obj): + return int(obj) + + +# Define the types we need. 
+class CtypesFlagEnum(IntFlag): + """A ctypes-compatible Enum superclass.""" + + @classmethod + def from_param(cls, obj): + return int(obj) + + +class AutoName(CtypesEnum): + def _generate_next_value_(name, start, count, last_values): + return count + + +# Receive data types +class SIMCONNECT_RECV_ID(CtypesEnum): + SIMCONNECT_RECV_ID_NULL = 0 + SIMCONNECT_RECV_ID_EXCEPTION = 1 + SIMCONNECT_RECV_ID_OPEN = 2 + SIMCONNECT_RECV_ID_QUIT = 3 + SIMCONNECT_RECV_ID_EVENT = 4 + SIMCONNECT_RECV_ID_EVENT_OBJECT_ADDREMOVE = 5 + SIMCONNECT_RECV_ID_EVENT_FILENAME = 6 + SIMCONNECT_RECV_ID_EVENT_FRAME = 7 + SIMCONNECT_RECV_ID_SIMOBJECT_DATA = 8 + SIMCONNECT_RECV_ID_SIMOBJECT_DATA_BYTYPE = 9 + SIMCONNECT_RECV_ID_WEATHER_OBSERVATION = 10 + SIMCONNECT_RECV_ID_CLOUD_STATE = 11 + SIMCONNECT_RECV_ID_ASSIGNED_OBJECT_ID = 12 + SIMCONNECT_RECV_ID_RESERVED_KEY = 13 + SIMCONNECT_RECV_ID_CUSTOM_ACTION = 14 + SIMCONNECT_RECV_ID_SYSTEM_STATE = 15 + SIMCONNECT_RECV_ID_CLIENT_DATA = 16 + SIMCONNECT_RECV_ID_EVENT_WEATHER_MODE = 17 + SIMCONNECT_RECV_ID_AIRPORT_LIST = 18 + SIMCONNECT_RECV_ID_VOR_LIST = 19 + SIMCONNECT_RECV_ID_NDB_LIST = 20 + SIMCONNECT_RECV_ID_WAYPOINT_LIST = 21 + SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_SERVER_STARTED = 22 + SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_CLIENT_STARTED = 23 + SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_SESSION_ENDED = 24 + SIMCONNECT_RECV_ID_EVENT_RACE_END = 25 + SIMCONNECT_RECV_ID_EVENT_RACE_LAP = 26 + + +# Data data types +class SIMCONNECT_DATATYPE(CtypesEnum): + SIMCONNECT_DATATYPE_INVALID = 0 # invalid data type + SIMCONNECT_DATATYPE_INT32 = 1 # 32-bit integer number + SIMCONNECT_DATATYPE_INT64 = 2 # 64-bit integer number + SIMCONNECT_DATATYPE_FLOAT32 = 3 # 32-bit floating-point number (float) + SIMCONNECT_DATATYPE_FLOAT64 = 4 # 64-bit floating-point number (double) + SIMCONNECT_DATATYPE_STRING8 = 5 # 8-byte string + SIMCONNECT_DATATYPE_STRING32 = 6 # 32-byte string + SIMCONNECT_DATATYPE_STRING64 = 7 # 64-byte string + SIMCONNECT_DATATYPE_STRING128 = 8 # 128-byte string + SIMCONNECT_DATATYPE_STRING256 = 9 # 256-byte string + SIMCONNECT_DATATYPE_STRING260 = 10 # 260-byte string + SIMCONNECT_DATATYPE_STRINGV = 11 # variable-length string + + SIMCONNECT_DATATYPE_INITPOSITION = 12 # see SIMCONNECT_DATA_INITPOSITION + SIMCONNECT_DATATYPE_MARKERSTATE = 13 # see SIMCONNECT_DATA_MARKERSTATE + SIMCONNECT_DATATYPE_WAYPOINT = 14 # see SIMCONNECT_DATA_WAYPOINT + SIMCONNECT_DATATYPE_LATLONALT = 15 # see SIMCONNECT_DATA_LATLONALT + SIMCONNECT_DATATYPE_XYZ = 16 # see SIMCONNECT_DATA_XYZ + + SIMCONNECT_DATATYPE_MAX = 17 # enum limit + + +# Exception error types +class SIMCONNECT_EXCEPTION(CtypesEnum): + SIMCONNECT_EXCEPTION_NONE = 0 + + SIMCONNECT_EXCEPTION_ERROR = 1 + SIMCONNECT_EXCEPTION_SIZE_MISMATCH = 2 + SIMCONNECT_EXCEPTION_UNRECOGNIZED_ID = 3 + SIMCONNECT_EXCEPTION_UNOPENED = 4 + SIMCONNECT_EXCEPTION_VERSION_MISMATCH = 5 + SIMCONNECT_EXCEPTION_TOO_MANY_GROUPS = 6 + SIMCONNECT_EXCEPTION_NAME_UNRECOGNIZED = 7 + SIMCONNECT_EXCEPTION_TOO_MANY_EVENT_NAMES = 8 + SIMCONNECT_EXCEPTION_EVENT_ID_DUPLICATE = 9 + SIMCONNECT_EXCEPTION_TOO_MANY_MAPS = 10 + SIMCONNECT_EXCEPTION_TOO_MANY_OBJECTS = 11 + SIMCONNECT_EXCEPTION_TOO_MANY_REQUESTS = 12 + SIMCONNECT_EXCEPTION_WEATHER_INVALID_PORT = 13 + SIMCONNECT_EXCEPTION_WEATHER_INVALID_METAR = 14 + SIMCONNECT_EXCEPTION_WEATHER_UNABLE_TO_GET_OBSERVATION = 15 + SIMCONNECT_EXCEPTION_WEATHER_UNABLE_TO_CREATE_STATION = 16 + SIMCONNECT_EXCEPTION_WEATHER_UNABLE_TO_REMOVE_STATION = 17 + SIMCONNECT_EXCEPTION_INVALID_DATA_TYPE = 18 + SIMCONNECT_EXCEPTION_INVALID_DATA_SIZE = 19 + 
SIMCONNECT_EXCEPTION_DATA_ERROR = 20
+    SIMCONNECT_EXCEPTION_INVALID_ARRAY = 21
+    SIMCONNECT_EXCEPTION_CREATE_OBJECT_FAILED = 22
+    SIMCONNECT_EXCEPTION_LOAD_FLIGHTPLAN_FAILED = 23
+    SIMCONNECT_EXCEPTION_OPERATION_INVALID_FOR_OBJECT_TYPE = 24
+    SIMCONNECT_EXCEPTION_ILLEGAL_OPERATION = 25
+    SIMCONNECT_EXCEPTION_ALREADY_SUBSCRIBED = 26
+    SIMCONNECT_EXCEPTION_INVALID_ENUM = 27
+    SIMCONNECT_EXCEPTION_DEFINITION_ERROR = 28
+    SIMCONNECT_EXCEPTION_DUPLICATE_ID = 29
+    SIMCONNECT_EXCEPTION_DATUM_ID = 30
+    SIMCONNECT_EXCEPTION_OUT_OF_BOUNDS = 31
+    SIMCONNECT_EXCEPTION_ALREADY_CREATED = 32
+    SIMCONNECT_EXCEPTION_OBJECT_OUTSIDE_REALITY_BUBBLE = 33
+    SIMCONNECT_EXCEPTION_OBJECT_CONTAINER = 34
+    SIMCONNECT_EXCEPTION_OBJECT_AI = 35
+    SIMCONNECT_EXCEPTION_OBJECT_ATC = 36
+    SIMCONNECT_EXCEPTION_OBJECT_SCHEDULE = 37
+
+
+# Object types
+class SIMCONNECT_SIMOBJECT_TYPE(CtypesEnum):
+    SIMCONNECT_SIMOBJECT_TYPE_USER = 0
+    SIMCONNECT_SIMOBJECT_TYPE_ALL = 1
+    SIMCONNECT_SIMOBJECT_TYPE_AIRCRAFT = 2
+    SIMCONNECT_SIMOBJECT_TYPE_HELICOPTER = 3
+    SIMCONNECT_SIMOBJECT_TYPE_BOAT = 4
+    SIMCONNECT_SIMOBJECT_TYPE_GROUND = 5
+
+
+# EventState values
+class SIMCONNECT_STATE(CtypesEnum):
+    SIMCONNECT_STATE_OFF = 0
+    SIMCONNECT_STATE_ON = 1
+
+
+# Object Data Request Period values
+class SIMCONNECT_PERIOD(CtypesEnum):  #
+    SIMCONNECT_PERIOD_NEVER = 0
+    SIMCONNECT_PERIOD_ONCE = 1
+    SIMCONNECT_PERIOD_VISUAL_FRAME = 2
+    SIMCONNECT_PERIOD_SIM_FRAME = 3
+    SIMCONNECT_PERIOD_SECOND = 4
+
+
+class SIMCONNECT_MISSION_END(CtypesEnum):  #
+    SIMCONNECT_MISSION_FAILED = 0
+    SIMCONNECT_MISSION_CRASHED = 1
+    SIMCONNECT_MISSION_SUCCEEDED = 2
+
+
+# ClientData Request Period values
+class SIMCONNECT_CLIENT_DATA_PERIOD(CtypesEnum):  #
+    SIMCONNECT_CLIENT_DATA_PERIOD_NEVER = 0
+    SIMCONNECT_CLIENT_DATA_PERIOD_ONCE = 1
+    SIMCONNECT_CLIENT_DATA_PERIOD_VISUAL_FRAME = 2
+    SIMCONNECT_CLIENT_DATA_PERIOD_ON_SET = 3
+    SIMCONNECT_CLIENT_DATA_PERIOD_SECOND = 4
+
+
+class SIMCONNECT_TEXT_TYPE(CtypesEnum):  #
+    SIMCONNECT_TEXT_TYPE_SCROLL_BLACK = 0
+    SIMCONNECT_TEXT_TYPE_SCROLL_WHITE = 1
+    SIMCONNECT_TEXT_TYPE_SCROLL_RED = 2
+    SIMCONNECT_TEXT_TYPE_SCROLL_GREEN = 3
+    SIMCONNECT_TEXT_TYPE_SCROLL_BLUE = 4
+    SIMCONNECT_TEXT_TYPE_SCROLL_YELLOW = 5
+    SIMCONNECT_TEXT_TYPE_SCROLL_MAGENTA = 6
+    SIMCONNECT_TEXT_TYPE_SCROLL_CYAN = 7
+    SIMCONNECT_TEXT_TYPE_PRINT_BLACK = 0x100
+    SIMCONNECT_TEXT_TYPE_PRINT_WHITE = 0x101
+    SIMCONNECT_TEXT_TYPE_PRINT_RED = 0x102
+    SIMCONNECT_TEXT_TYPE_PRINT_GREEN = 0x103
+    SIMCONNECT_TEXT_TYPE_PRINT_BLUE = 0x104
+    SIMCONNECT_TEXT_TYPE_PRINT_YELLOW = 0x105
+    SIMCONNECT_TEXT_TYPE_PRINT_MAGENTA = 0x106
+    SIMCONNECT_TEXT_TYPE_PRINT_CYAN = 0x107
+    SIMCONNECT_TEXT_TYPE_MENU = 0x0200
+
+
+class SIMCONNECT_TEXT_RESULT(CtypesEnum):  #
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_1 = 0
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_2 = 1
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_3 = 2
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_4 = 3
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_5 = 4
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_6 = 5
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_7 = 6
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_8 = 7
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_9 = 8
+    SIMCONNECT_TEXT_RESULT_MENU_SELECT_10 = 9
+    SIMCONNECT_TEXT_RESULT_DISPLAYED = 0x10000
+    SIMCONNECT_TEXT_RESULT_QUEUED = 0x10001
+    SIMCONNECT_TEXT_RESULT_REMOVED = 0x10002
+    SIMCONNECT_TEXT_RESULT_REPLACED = 0x10003
+    SIMCONNECT_TEXT_RESULT_TIMEOUT = 0x10004
+
+
+class SIMCONNECT_WEATHER_MODE(CtypesEnum):  #
+    SIMCONNECT_WEATHER_MODE_THEME = 0
+    SIMCONNECT_WEATHER_MODE_RWW = 1
+    SIMCONNECT_WEATHER_MODE_CUSTOM = 2
+    SIMCONNECT_WEATHER_MODE_GLOBAL = 3
+
+
+class SIMCONNECT_FACILITY_LIST_TYPE(CtypesEnum):  #
+    SIMCONNECT_FACILITY_LIST_TYPE_AIRPORT = 0
+    SIMCONNECT_FACILITY_LIST_TYPE_WAYPOINT = 1
+    SIMCONNECT_FACILITY_LIST_TYPE_NDB = 2
+    SIMCONNECT_FACILITY_LIST_TYPE_VOR = 3
+    SIMCONNECT_FACILITY_LIST_TYPE_COUNT = 4  # invalid
+
+
+class SIMCONNECT_VOR_FLAGS(CtypesFlagEnum):  # flags for SIMCONNECT_RECV_ID_VOR_LIST
+    SIMCONNECT_RECV_ID_VOR_LIST_HAS_NAV_SIGNAL = 0x00000001  # Has Nav signal
+    SIMCONNECT_RECV_ID_VOR_LIST_HAS_LOCALIZER = 0x00000002  # Has localizer
+    SIMCONNECT_RECV_ID_VOR_LIST_HAS_GLIDE_SLOPE = 0x00000004  # Has glide slope
+    SIMCONNECT_RECV_ID_VOR_LIST_HAS_DME = 0x00000008  # Station has DME
+
+
+# bits for the Waypoint Flags field: may be combined
+class SIMCONNECT_WAYPOINT_FLAGS(CtypesFlagEnum):  #
+    SIMCONNECT_WAYPOINT_NONE = 0x00  #
+    SIMCONNECT_WAYPOINT_SPEED_REQUESTED = 0x04  # requested speed at waypoint is valid
+    SIMCONNECT_WAYPOINT_THROTTLE_REQUESTED = 0x08  # request a specific throttle percentage
+    SIMCONNECT_WAYPOINT_COMPUTE_VERTICAL_SPEED = 0x10  # compute vertical speed to reach waypoint altitude when crossing the waypoint
+    SIMCONNECT_WAYPOINT_ALTITUDE_IS_AGL = 0x20  # AltitudeIsAGL
+    SIMCONNECT_WAYPOINT_ON_GROUND = 0x00100000  # place this waypoint on the ground
+    SIMCONNECT_WAYPOINT_REVERSE = 0x00200000  # Back up to this waypoint. Only valid on first waypoint
+    SIMCONNECT_WAYPOINT_WRAP_TO_FIRST = 0x00400000
+
+
+class SIMCONNECT_EVENT_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_EVENT_FLAG_DEFAULT = 0x00000000  #
+    SIMCONNECT_EVENT_FLAG_FAST_REPEAT_TIMER = 0x00000001  # set event repeat timer to simulate fast repeat
+    SIMCONNECT_EVENT_FLAG_SLOW_REPEAT_TIMER = 0x00000002  # set event repeat timer to simulate slow repeat
+    SIMCONNECT_EVENT_FLAG_GROUPID_IS_PRIORITY = 0x00000010  # interpret GroupID parameter as priority value
+
+
+class SIMCONNECT_DATA_REQUEST_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_DATA_REQUEST_FLAG_DEFAULT = 0x00000000
+    SIMCONNECT_DATA_REQUEST_FLAG_CHANGED = 0x00000001  # send requested data when value(s) change
+    SIMCONNECT_DATA_REQUEST_FLAG_TAGGED = 0x00000002  # send requested data in tagged format
+
+
+class SIMCONNECT_DATA_SET_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_DATA_SET_FLAG_DEFAULT = 0x00000000
+    SIMCONNECT_DATA_SET_FLAG_TAGGED = 0x00000001  # data is in tagged format
+
+
+class SIMCONNECT_CREATE_CLIENT_DATA_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_CREATE_CLIENT_DATA_FLAG_DEFAULT = 0x00000000  #
+    SIMCONNECT_CREATE_CLIENT_DATA_FLAG_READ_ONLY = 0x00000001  # permit only ClientData creator to write into ClientData
+
+
+class SIMCONNECT_CLIENT_DATA_REQUEST_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_CLIENT_DATA_REQUEST_FLAG_DEFAULT = 0x00000000  #
+    SIMCONNECT_CLIENT_DATA_REQUEST_FLAG_CHANGED = 0x00000001  # send requested ClientData when value(s) change
+    SIMCONNECT_CLIENT_DATA_REQUEST_FLAG_TAGGED = 0x00000002  # send requested ClientData in tagged format
+
+
+class SIMCONNECT_CLIENT_DATA_SET_FLAG(CtypesFlagEnum):  #
+    SIMCONNECT_CLIENT_DATA_SET_FLAG_DEFAULT = 0x00000000  #
+    SIMCONNECT_CLIENT_DATA_SET_FLAG_TAGGED = 0x00000001  # data is in tagged format
+
+
+class SIMCONNECT_VIEW_SYSTEM_EVENT_DATA(CtypesFlagEnum):  # dwData contains these flags for the "View" System Event
+    SIMCONNECT_VIEW_SYSTEM_EVENT_DATA_COCKPIT_2D = 0x00000001  # 2D Panels in cockpit view
+    SIMCONNECT_VIEW_SYSTEM_EVENT_DATA_COCKPIT_VIRTUAL = 0x00000002  # Virtual (3D) panels in cockpit view
+    SIMCONNECT_VIEW_SYSTEM_EVENT_DATA_ORTHOGONAL = 0x00000004  # Orthogonal (Map) view
+
+
+class SIMCONNECT_SOUND_SYSTEM_EVENT_DATA(CtypesFlagEnum):  # dwData contains these flags for the "Sound" System Event
+    SIMCONNECT_SOUND_SYSTEM_EVENT_DATA_MASTER = 0x00000001  # Sound Master
+
+
+class SIMCONNECT_PICK_FLAGS(CtypesFlagEnum):
+    SIMCONNECT_PICK_GROUND = 0x01  # pick ground/ pick result item is ground location
+    SIMCONNECT_PICK_AI = 0x02  # pick AI / pick result item is AI, (dwSimObjectID is valid)
+    SIMCONNECT_PICK_SCENERY = 0x04  # pick scenery/ pick result item is scenery object (hSceneryObject is valid)
+    SIMCONNECT_PICK_ALL = 0x04 | 0x02 | 0x01  # pick all / (not valid on pick result item)
+    SIMCONNECT_PICK_COORDSASPIXELS = 0x08  #
+
+
+# ----------------------------------------------------------------------------
+# User-defined enums
+# ----------------------------------------------------------------------------
+class SIMCONNECT_NOTIFICATION_GROUP_ID(
+    AutoName
+):  # client-defined notification group ID
+    pass
+
+
+class SIMCONNECT_INPUT_GROUP_ID(AutoName):  # client-defined input group ID
+    pass
+
+
+class SIMCONNECT_DATA_DEFINITION_ID(AutoName):  # client-defined data definition ID
+    pass
+
+
+class SIMCONNECT_DATA_REQUEST_ID(AutoName):  # client-defined request data ID
+    pass
+
+
+class SIMCONNECT_CLIENT_EVENT_ID(AutoName):  # client-defined client event ID
+    EVENT_SIM_START = auto()
+    EVENT_SIM_STOP = auto()
+    EVENT_SIM_PAUSED = auto()
+    EVENT_SIM_UNPAUSED = auto()
+
+
+class SIMCONNECT_CLIENT_DATA_ID(AutoName):  # client-defined client data ID
+    pass
+
+
+class SIMCONNECT_CLIENT_DATA_DEFINITION_ID(
+    AutoName
+):  # client-defined client data definition ID
+    pass
+
+
+# ----------------------------------------------------------------------------
+# Struct definitions
+# ----------------------------------------------------------------------------
+
+
+class SIMCONNECT_RECV(Structure):
+    _fields_ = [("dwSize", DWORD), ("dwVersion", DWORD), ("dwID", DWORD)]
+
+
+class SIMCONNECT_RECV_EXCEPTION(
+    SIMCONNECT_RECV
+):  # when dwID == SIMCONNECT_RECV_ID_EXCEPTION
+    _fields_ = [
+        ("dwException", DWORD),  # see SIMCONNECT_EXCEPTION
+        ("UNKNOWN_SENDID", DWORD),  #
+        ("dwSendID", DWORD),  # see SimConnect_GetLastSentPacketID
+        ("UNKNOWN_INDEX", DWORD),  #
+        ("dwIndex", DWORD),  # index of parameter that was source of error
+    ]
+
+
+class SIMCONNECT_RECV_OPEN(SIMCONNECT_RECV):  # when dwID == SIMCONNECT_RECV_ID_OPEN
+    _fields_ = [
+        ("szApplicationName", c_char * 256),
+        ("dwApplicationVersionMajor", DWORD),
+        ("dwApplicationVersionMinor", DWORD),
+        ("dwApplicationBuildMajor", DWORD),
+        ("dwApplicationBuildMinor", DWORD),
+        ("dwSimConnectVersionMajor", DWORD),
+        ("dwSimConnectVersionMinor", DWORD),
+        ("dwSimConnectBuildMajor", DWORD),
+        ("dwSimConnectBuildMinor", DWORD),
+        ("dwReserved1", DWORD),
+        ("dwReserved2", DWORD),
+    ]
+
+
+class SIMCONNECT_RECV_QUIT(SIMCONNECT_RECV):  # when dwID == SIMCONNECT_RECV_ID_QUIT
+    pass
+
+
+class SIMCONNECT_RECV_EVENT(SIMCONNECT_RECV):  # when dwID == SIMCONNECT_RECV_ID_EVENT
+    UNKNOWN_GROUP = DWORD_MAX
+    _fields_ = [
+        ("uGroupID", DWORD),
+        ("uEventID", DWORD),
+        ("dwData", DWORD),  # uEventID-dependent context
+    ]
+
+
+class SIMCONNECT_RECV_EVENT_FILENAME(
+    SIMCONNECT_RECV_EVENT
+):  # when dwID == SIMCONNECT_RECV_ID_EVENT_FILENAME
+    _fields_ = [
+        ("zFileName", c_char * MAX_PATH),  # uEventID-dependent context
+        ("dwFlags", DWORD),
+    ]
+
+
+class SIMCONNECT_RECV_EVENT_OBJECT_ADDREMOVE(
+    SIMCONNECT_RECV_EVENT
+):  # when dwID == SIMCONNECT_RECV_ID_EVENT_OBJECT_ADDREMOVE
+    eObjType = SIMCONNECT_SIMOBJECT_TYPE
+
+
+class SIMCONNECT_RECV_EVENT_FRAME(
+    SIMCONNECT_RECV_EVENT
+):  # when dwID == SIMCONNECT_RECV_ID_EVENT_FRAME
+    _fields_ =
[("fFrameRate", c_float), ("fSimSpeed", c_float)] + + +class SIMCONNECT_RECV_EVENT_MULTIPLAYER_SERVER_STARTED(SIMCONNECT_RECV_EVENT): + # when dwID == SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_SERVER_STARTED + # No event specific data, for now + pass + + +class SIMCONNECT_RECV_EVENT_MULTIPLAYER_CLIENT_STARTED(SIMCONNECT_RECV_EVENT): + # when dwID == SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_CLIENT_STARTED + # No event specific data, for now + pass + + +class SIMCONNECT_RECV_EVENT_MULTIPLAYER_SESSION_ENDED(SIMCONNECT_RECV_EVENT): + # when dwID == SIMCONNECT_RECV_ID_EVENT_MULTIPLAYER_SESSION_ENDED + # No event specific data, for now + pass + + +# SIMCONNECT_DATA_RACE_RESULT +class SIMCONNECT_DATA_RACE_RESULT(Structure): + _fields_ = [ + ("dwNumberOfRacers", DWORD), # The total number of racers + ("szPlayerName", c_char * MAX_PATH), # The name of the player + ( + "szSessionType", + c_char * MAX_PATH, + ), # The type of the multiplayer session: "LAN", "GAMESPY") + ("szAircraft", c_char * MAX_PATH), # The aircraft type + ("szPlayerRole", c_char * MAX_PATH), # The player role in the mission + ("fTotalTime", c_double), # Total time in seconds, 0 means DNF + ("fPenaltyTime", c_double), # Total penalty time in seconds + ( + "MissionGUID", + DWORD, + ), # The name of the mission to execute, NULL if no mission + ("dwIsDisqualified", c_double), # non 0 - disqualified, 0 - not disqualified + ] + + +class SIMCONNECT_RECV_EVENT_RACE_END( + SIMCONNECT_RECV_EVENT +): # when dwID == SIMCONNECT_RECV_ID_EVENT_RACE_END + RacerData = SIMCONNECT_DATA_RACE_RESULT + _fields_ = [("dwRacerNumber", DWORD)] # The index of the racer the results are for + + +class SIMCONNECT_RECV_EVENT_RACE_LAP( + SIMCONNECT_RECV_EVENT +): # when dwID == SIMCONNECT_RECV_ID_EVENT_RACE_LAP + RacerData = SIMCONNECT_DATA_RACE_RESULT + _fields_ = [("dwLapIndex", DWORD)] # The index of the lap the results are for + + +class SIMCONNECT_RECV_SIMOBJECT_DATA(SIMCONNECT_RECV): + _fields_ = [ + ("dwRequestID", DWORD), + ("dwObjectID", DWORD), + ("dwDefineID", DWORD), + ("dwFlags", DWORD), + ("dwentrynumber", DWORD), + ("dwoutof", DWORD), + ("dwDefineCount", DWORD), + ("dwData", DWORD * 8192), + ] + + +class SIMCONNECT_RECV_SIMOBJECT_DATA_BYTYPE(SIMCONNECT_RECV_SIMOBJECT_DATA): + _fields_ = [] + + +class SIMCONNECT_RECV_CLIENT_DATA( + SIMCONNECT_RECV_SIMOBJECT_DATA +): # when dwID == SIMCONNECT_RECV_ID_CLIENT_DATA + _fields_ = [] + + +class SIMCONNECT_RECV_WEATHER_OBSERVATION( + SIMCONNECT_RECV +): # when dwID == SIMCONNECT_RECV_ID_WEATHER_OBSERVATION + _fields_ = [ + ("dwRequestID", DWORD), + ( + "szMetar", + c_char * MAX_METAR_LENGTH.value, + ), # Variable length string whose maximum size is MAX_METAR_LENGTH + ] + + +SIMCONNECT_CLOUD_STATE_ARRAY_WIDTH = 64 +SIMCONNECT_CLOUD_STATE_ARRAY_SIZE = ( + SIMCONNECT_CLOUD_STATE_ARRAY_WIDTH * SIMCONNECT_CLOUD_STATE_ARRAY_WIDTH +) + + +class SIMCONNECT_RECV_CLOUD_STATE(SIMCONNECT_RECV): + # when dwID == SIMCONNECT_RECV_ID_CLOUD_STATE + _fields_ = [ + ("dwRequestID", DWORD), + ("dwArraySize", DWORD), + # SIMCONNECT_FIXEDTYPE_DATAV(BYTE, rgbData, dwArraySize, U1 /*member of UnmanagedType enum*/ , System::Byte /*cli type*/); + ] + + +class SIMCONNECT_RECV_ASSIGNED_OBJECT_ID( + SIMCONNECT_RECV +): # when dwID == SIMCONNECT_RECV_ID_ASSIGNED_OBJECT_ID + _fields_ = [("dwRequestID", DWORD), ("dwObjectID", DWORD)] + + +class SIMCONNECT_RECV_RESERVED_KEY( + SIMCONNECT_RECV +): # when dwID == SIMCONNECT_RECV_ID_RESERVED_KEY + _fields_ = [("szChoiceReserved", c_char * 30), ("szReservedKey", c_char * 30)] + + +class 
SIMCONNECT_RECV_SYSTEM_STATE( + SIMCONNECT_RECV +): # when dwID == SIMCONNECT_RECV_ID_SYSTEM_STATE + _fields_ = [ + ("dwRequestID", DWORD), + ("dwInteger", DWORD), + ("fFloat", c_float), + ("szString", c_char * MAX_PATH), + ] + + +class SIMCONNECT_RECV_CUSTOM_ACTION(SIMCONNECT_RECV_EVENT): # + _fields_ = [ + ("guidInstanceId", DWORD), # Instance id of the action that executed + ("dwWaitForCompletion", DWORD), # Wait for completion flag on the action + ( + "szPayLoad", + c_char, + ), # Variable length string payload associated with the mission action. + ] + + +class SIMCONNECT_RECV_EVENT_WEATHER_MODE(SIMCONNECT_RECV_EVENT): # + _fields_ = ( + [] + ) # No event specific data - the new weather mode is in the base structure dwData member. + + +# SIMCONNECT_RECV_FACILITIES_LIST +class SIMCONNECT_RECV_FACILITIES_LIST(SIMCONNECT_RECV): # + _fields_ = [ + ("dwRequestID", DWORD), + ("dwArraySize", DWORD), + ( + "dwEntryNumber", + DWORD, + ), # when the array of items is too big for one send, which send this is (0..dwOutOf-1) + ("dwOutOf", DWORD), # total number of transmissions the list is chopped into + ] + + +# SIMCONNECT_DATA_FACILITY_AIRPORT +class SIMCONNECT_DATA_FACILITY_AIRPORT(Structure): # + _fields_ = [ + ("Icao", c_char * 9), # ICAO of the object + ("Latitude", c_double), # degrees + ("Longitude", c_double), # degrees + ("Altitude", c_double), # meters + ] + + +# SIMCONNECT_RECV_AIRPORT_LIST +# class SIMCONNECT_RECV_AIRPORT_LIST(SIMCONNECT_RECV_FACILITIES_LIST): # +# _fields_ = [ +# ("SIMCONNECT_DATA_FACILITY_AIRPORT", rgData * dwArraySize) +# ] +# SIMCONNECT_FIXEDTYPE_DATAV(SIMCONNECT_DATA_FACILITY_AIRPORT, rgData, dwArraySize, +# U1 /*member of UnmanagedType enum*/, SIMCONNECT_DATA_FACILITY_AIRPORT /*cli type*/); + + +# SIMCONNECT_DATA_FACILITY_WAYPOINT +class SIMCONNECT_DATA_FACILITY_WAYPOINT(SIMCONNECT_DATA_FACILITY_AIRPORT): # + _fields_ = [("fMagVar", c_float)] # Magvar in degrees + + +# SIMCONNECT_RECV_WAYPOINT_LIST +# class SIMCONNECT_RECV_WAYPOINT_LIST(SIMCONNECT_RECV_FACILITIES_LIST): # +# _fields_ = [ +# ("", ) +# ] +# SIMCONNECT_FIXEDTYPE_DATAV(SIMCONNECT_DATA_FACILITY_WAYPOINT, +# rgData +# dwArraySize, +# U1 /*member of UnmanagedType enum*/, +# SIMCONNECT_DATA_FACILITY_WAYPOINT /*cli type*/); + + +# SIMCONNECT_DATA_FACILITY_NDB +class SIMCONNECT_DATA_FACILITY_NDB(SIMCONNECT_DATA_FACILITY_WAYPOINT): # + _fields_ = [("fFrequency", DWORD)] # frequency in Hz + + +# SIMCONNECT_RECV_NDB_LIST +# class SIMCONNECT_RECV_NDB_LIST(SIMCONNECT_RECV_FACILITIES_LIST): # +# _fields_ = [ +# ("", ) +# ] +# SIMCONNECT_FIXEDTYPE_DATAV(SIMCONNECT_DATA_FACILITY_NDB, +# rgData +# dwArraySize, +# U1 /*member of UnmanagedType enum*/, +# SIMCONNECT_DATA_FACILITY_NDB /*cli type*/); + + +# SIMCONNECT_DATA_FACILITY_VOR +class SIMCONNECT_DATA_FACILITY_VOR(SIMCONNECT_DATA_FACILITY_NDB): # + _fields_ = [ + ("Flags", DWORD), # SIMCONNECT_VOR_FLAGS + ("fLocalizer", c_float), # Localizer in degrees + ("GlideLat", c_double), # Glide Slope Location (deg, deg, meters) + ("GlideLon", c_double), # + ("GlideAlt", c_double), # + ("fGlideSlopeAngle", c_float), # Glide Slope in degrees + ] + + +# SIMCONNECT_RECV_VOR_LIST +# class SIMCONNECT_RECV_VOR_LIST(SIMCONNECT_RECV_FACILITIES_LIST): # +# _fields_ = [ +# ("", ) +# ] +# SIMCONNECT_FIXEDTYPE_DATAV(SIMCONNECT_DATA_FACILITY_VOR, +# rgData +# dwArraySize, +# U1 /*member of UnmanagedType enum*/, SIMCONNECT_DATA_FACILITY_VOR /*cli type*/); + + +class SIMCONNECT_RECV_PICK( + SIMCONNECT_RECV +): # when dwID == SIMCONNECT_RECV_ID_RESERVED_KEY + _fields_ = [ + ("hContext", 
HANDLE), + ("dwFlags", DWORD), + ("Latitude", c_double), # degrees + ("Longitude", c_double), # degrees + ("Altitude", c_double), # feet + ("xPos", c_int), # reserved + ("yPos", c_int), # reserved + ("dwSimObjectID", DWORD), + ("hSceneryObject", HANDLE), + ( + "dwentrynumber", + DWORD, + ), # if multiple objects returned, this is number out of . + ("dwoutof", DWORD), # note: starts with 1, not 0. + ] + + +# SIMCONNECT_DATATYPE_INITPOSITION +class SIMCONNECT_DATA_INITPOSITION(Structure): # + _fields_ = [ + ("Latitude", c_double), # degrees + ("Longitude", c_double), # degrees + ("Altitude", c_double), # feet + ("Pitch", c_double), # degrees + ("Bank", c_double), # degrees + ("Heading", c_double), # degrees + ("OnGround", DWORD), # 1=force to be on the ground + ("Airspeed", DWORD), # knots + ] + + +# SIMCONNECT_DATATYPE_MARKERSTATE +class SIMCONNECT_DATA_MARKERSTATE(Structure): # + _fields_ = [("szMarkerName", c_char * 64), ("dwMarkerState", DWORD)] + + +# SIMCONNECT_DATATYPE_WAYPOINT +class SIMCONNECT_DATA_WAYPOINT(Structure): # + _fields_ = [ + ("Latitude", c_double), # degrees + ("Longitude", c_double), # degrees + ("Altitude", c_double), # feet + ("Flags", c_ulong), + ("ktsSpeed", c_double), # knots + ("percentThrottle", c_double), + ] + + +# SIMCONNECT_DATA_LATLONALT +class SIMCONNECT_DATA_LATLONALT(Structure): # + _fields_ = [("Latitude", c_double), ("Longitude", c_double), ("Altitude", c_double)] + + +# SIMCONNECT_DATA_XYZ +class SIMCONNECT_DATA_XYZ(Structure): # + _fields_ = [("x", c_double), ("y", c_double), ("z", c_double)] diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/EventList.py b/templates/skills/msfs2020_control/dependencies/SimConnect/EventList.py new file mode 100644 index 00000000..bec37d14 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/EventList.py @@ -0,0 +1,1225 @@ +from SimConnect import * + + +class Event(object): + + def __call__(self, value=0): + if self.event is None: + self.event = self.sm.map_to_sim_event(self.deff) + self.sm.send_event(self.event, DWORD(value)) + + def __init__(self, _deff, _sm, _dec=''): + self.deff = _deff + self.event = None + self.description = _dec + self.sm = _sm + + +class EventHelper: + def __init__(self, _sm): + self.sm = _sm + + def __getattr__(self, _name): + for key in self.list: + if _name == key[0].decode(): + ne = Event(key[0], self.sm, _dec=key[1]) + setattr(self, _name, ne) + return ne + return None + + def get(self, _name): + return getattr(self, _name) + + def set(self, _name, _value=0): + setattr(self, _name, _value) + + +class AircraftEvents(): + def __init__(self, _sm): + self.sm = _sm + self.list = [] + self.Engine = self.__Engine(_sm) + self.list.append(self.Engine) + self.Flight_Controls = self.__Flight_Controls(_sm) + self.list.append(self.Flight_Controls) + self.Autopilot = self.__Autopilot(_sm) + self.list.append(self.Autopilot) + self.Fuel_System = self.__Fuel_System(_sm) + self.list.append(self.Fuel_System) + self.Fuel_Selection_Keys = self.__Fuel_Selection_Keys(_sm) + self.list.append(self.Fuel_Selection_Keys) + self.Avionics = self.__Avionics(_sm) + self.list.append(self.Avionics) + self.Instruments = self.__Instruments(_sm) + self.list.append(self.Instruments) + self.Lights = self.__Lights(_sm) + self.list.append(self.Lights) + self.Failures = self.__Failures(_sm) + self.list.append(self.Failures) + self.Miscellaneous_Systems = self.__Miscellaneous_Systems(_sm) + self.list.append(self.Miscellaneous_Systems) + self.Nose_wheel_steering = 
self.__Nose_wheel_steering(_sm) + self.list.append(self.Nose_wheel_steering) + self.Cabin_pressurization = self.__Cabin_pressurization(_sm) + self.list.append(self.Cabin_pressurization) + self.Catapult_Launches = self.__Catapult_Launches(_sm) + self.list.append(self.Catapult_Launches) + self.Helicopter_Specific_Systems = self.__Helicopter_Specific_Systems(_sm) + self.list.append(self.Helicopter_Specific_Systems) + self.Slings_and_Hoists = self.__Slings_and_Hoists(_sm) + self.list.append(self.Slings_and_Hoists) + self.Slew_System = self.__Slew_System(_sm) + self.list.append(self.Slew_System) + self.View_System = self.__View_System(_sm) + self.list.append(self.View_System) + self.Miscellaneous_Events = self.__Miscellaneous_Events(_sm) + self.list.append(self.Miscellaneous_Events) + self.Freezing_position = self.__Freezing_position(_sm) + self.list.append(self.Freezing_position) + self.Mission_Keys = self.__Mission_Keys(_sm) + self.list.append(self.Mission_Keys) + self.ATC = self.__ATC(_sm) + self.list.append(self.ATC) + self.Multiplayer = self.__Multiplayer(_sm) + self.list.append(self.Multiplayer) + + def find(self, key): + for clas in self.list: + for test in clas.list: + if key == test[0].decode(): + return getattr(clas, key) + return None + + class __Engine(EventHelper): + list = [ + (b'THROTTLE_FULL', "Set throttles max", "Shared Cockpit"), + (b'THROTTLE_INCR', "Increment throttles", "Shared Cockpit"), + (b'THROTTLE_INCR_SMALL', "Increment throttles small", "Shared Cockpit"), + (b'THROTTLE_DECR', "Decrement throttles", "Shared Cockpit"), + (b'THROTTLE_DECR_SMALL', "Decrease throttles small", "Shared Cockpit"), + (b'THROTTLE_CUT', "Set throttles to idle", "Shared Cockpit"), + (b'INCREASE_THROTTLE', "Increment throttles", "Shared Cockpit"), + (b'DECREASE_THROTTLE', "Decrement throttles", "Shared Cockpit"), + (b'THROTTLE_SET', "Set throttles exactly (0- 16383),", "Shared Cockpit"), + (b'AXIS_THROTTLE_SET', "Set throttles (0- 16383),", "Shared Cockpit (Pilot only, transmitted to Co-pilot if in a helicopter, not-transmitted otherwise)."), + (b'THROTTLE1_SET', "Set throttle 1 exactly (0 to 16383),", "Shared Cockpit"), + (b'THROTTLE2_SET', "Set throttle 2 exactly (0 to 16383),", "Shared Cockpit"), + (b'THROTTLE3_SET', "Set throttle 3 exactly (0 to 16383),", "Shared Cockpit"), + (b'THROTTLE4_SET', "Set throttle 4 exactly (0 to 16383),", "Shared Cockpit"), + (b'THROTTLE1_FULL', "Set throttle 1 max", "Shared Cockpit"), + (b'THROTTLE1_INCR', "Increment throttle 1", "Shared Cockpit"), + (b'THROTTLE1_INCR_SMALL', "Increment throttle 1 small", "Shared Cockpit"), + (b'THROTTLE1_DECR', "Decrement throttle 1", "Shared Cockpit"), + (b'THROTTLE1_CUT', "Set throttle 1 to idle", "Shared Cockpit"), + (b'THROTTLE2_FULL', "Set throttle 2 max", "Shared Cockpit"), + (b'THROTTLE2_INCR', "Increment throttle 2", "Shared Cockpit"), + (b'THROTTLE2_INCR_SMALL', "Increment throttle 2 small", "Shared Cockpit"), + (b'THROTTLE2_DECR', "Decrement throttle 2", "Shared Cockpit"), + (b'THROTTLE2_CUT', "Set throttle 2 to idle", "Shared Cockpit"), + (b'THROTTLE3_FULL', "Set throttle 3 max", "Shared Cockpit"), + (b'THROTTLE3_INCR', "Increment throttle 3", "Shared Cockpit"), + (b'THROTTLE3_INCR_SMALL', "Increment throttle 3 small", "Shared Cockpit"), + (b'THROTTLE3_DECR', "Decrement throttle 3", "Shared Cockpit"), + (b'THROTTLE3_CUT', "Set throttle 3 to idle", "Shared Cockpit"), + (b'THROTTLE4_FULL', "Set throttle 1 max", "Shared Cockpit"), + (b'THROTTLE4_INCR', "Increment throttle 4", "Shared Cockpit"), + 
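+            # Usage sketch: each tuple below is (event ID, description,
+            # shared-cockpit/multiplayer note). IDs resolve lazily through
+            # EventHelper.__getattr__ / AircraftEvents.find() into callable
+            # Event objects that map and transmit the event on first use.
+            # Assuming MSFS is running and the package's SimConnect connection
+            # class is available:
+            #
+            #     sm = SimConnect()
+            #     ae = AircraftEvents(sm)
+            #     ae.find("THROTTLE_FULL")()  # trigger events need no argument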
(b'THROTTLE4_INCR_SMALL', "Increment throttle 4 small", "Shared Cockpit"), + (b'THROTTLE4_DECR', "Decrement throttle 4", "Shared Cockpit"), + (b'THROTTLE4_CUT', "Set throttle 4 to idle", "Shared Cockpit"), + (b'THROTTLE_10', "Set throttles to 10%", "Shared Cockpit"), + (b'THROTTLE_20', "Set throttles to 20%", "Shared Cockpit"), + (b'THROTTLE_30', "Set throttles to 30%", "Shared Cockpit"), + (b'THROTTLE_40', "Set throttles to 40%", "Shared Cockpit"), + (b'THROTTLE_50', "Set throttles to 50%", "Shared Cockpit"), + (b'THROTTLE_60', "Set throttles to 60%", "Shared Cockpit"), + (b'THROTTLE_70', "Set throttles to 70%", "Shared Cockpit"), + (b'THROTTLE_80', "Set throttles to 80%", "Shared Cockpit"), + (b'THROTTLE_90', "Set throttles to 90%", "Shared Cockpit"), + (b'AXIS_THROTTLE1_SET', "Set throttle 1 exactly (-16383 - +16383),", "Shared Cockpit"), + (b'AXIS_THROTTLE2_SET', "Set throttle 2 exactly (-16383 - +16383),", "Shared Cockpit"), + (b'AXIS_THROTTLE3_SET', "Set throttle 3 exactly (-16383 - +16383),", "Shared Cockpit"), + (b'AXIS_THROTTLE4_SET', "Set throttle 4 exactly (-16383 - +16383),", "Shared Cockpit"), + (b'THROTTLE1_DECR_SMALL', "Decrease throttle 1 small", "Shared Cockpit"), + (b'THROTTLE2_DECR_SMALL', "Decrease throttle 2 small", "Shared Cockpit"), + (b'THROTTLE3_DECR_SMALL', "Decrease throttle 3 small", "Shared Cockpit"), + (b'THROTTLE4_DECR_SMALL', "Decrease throttle 4 small", "Shared Cockpit"), + (b'PROP_PITCH_DECR_SMALL', "Decrease prop levers small", "Shared Cockpit"), + (b'PROP_PITCH1_DECR_SMALL', "Decrease prop lever 1 small", "Shared Cockpit"), + (b'PROP_PITCH2_DECR_SMALL', "Decrease prop lever 2 small", "Shared Cockpit"), + (b'PROP_PITCH3_DECR_SMALL', "Decrease prop lever 3 small", "Shared Cockpit"), + (b'PROP_PITCH4_DECR_SMALL', "Decrease prop lever 4 small", "Shared Cockpit"), + (b'MIXTURE1_RICH', "Set mixture lever 1 to max rich", "Shared Cockpit"), + (b'MIXTURE1_INCR', "Increment mixture lever 1", "Shared Cockpit"), + (b'MIXTURE1_INCR_SMALL', "Increment mixture lever 1 small", "Shared Cockpit"), + (b'MIXTURE1_DECR', "Decrement mixture lever 1", "Shared Cockpit"), + (b'MIXTURE1_LEAN', "Set mixture lever 1 to max lean", "Shared Cockpit"), + (b'MIXTURE2_RICH', "Set mixture lever 2 to max rich", "Shared Cockpit"), + (b'MIXTURE2_INCR', "Increment mixture lever 2", "Shared Cockpit"), + (b'MIXTURE2_INCR_SMALL', "Increment mixture lever 2 small", "Shared Cockpit"), + (b'MIXTURE2_DECR', "Decrement mixture lever 2", "Shared Cockpit"), + (b'MIXTURE2_LEAN', "Set mixture lever 2 to max lean", "Shared Cockpit"), + (b'MIXTURE3_RICH', "Set mixture lever 3 to max rich", "Shared Cockpit"), + (b'MIXTURE3_INCR', "Increment mixture lever 3", "Shared Cockpit"), + (b'MIXTURE3_INCR_SMALL', "Increment mixture lever 3 small", "Shared Cockpit"), + (b'MIXTURE3_DECR', "Decrement mixture lever 3", "Shared Cockpit"), + (b'MIXTURE3_LEAN', "Set mixture lever 3 to max lean", "Shared Cockpit"), + (b'MIXTURE4_RICH', "Set mixture lever 4 to max rich", "Shared Cockpit"), + (b'MIXTURE4_INCR', "Increment mixture lever 4", "Shared Cockpit"), + (b'MIXTURE4_INCR_SMALL', "Increment mixture lever 4 small", "Shared Cockpit"), + (b'MIXTURE4_DECR', "Decrement mixture lever 4", "Shared Cockpit"), + (b'MIXTURE4_LEAN', "Set mixture lever 4 to max lean", "Shared Cockpit"), + (b'MIXTURE_SET', "Set mixture levers to exact value (0 to 16383),", "Shared Cockpit"), + (b'MIXTURE_RICH', "Set mixture levers to max rich", "Shared Cockpit"), + (b'MIXTURE_INCR', "Increment mixture levers", "Shared Cockpit"), + 
(b'MIXTURE_INCR_SMALL', "Increment mixture levers small", "Shared Cockpit"), + (b'MIXTURE_DECR', "Decrement mixture levers", "Shared Cockpit"), + (b'MIXTURE_LEAN', "Set mixture levers to max lean", "Shared Cockpit"), + (b'MIXTURE1_SET', "Set mixture lever 1 exact value (0 to 16383),", "Shared Cockpit"), + (b'MIXTURE2_SET', "Set mixture lever 2 exact value (0 to 16383),", "Shared Cockpit"), + (b'MIXTURE3_SET', "Set mixture lever 3 exact value (0 to 16383),", "Shared Cockpit"), + (b'MIXTURE4_SET', "Set mixture lever 4 exact value (0 to 16383),", "Shared Cockpit"), + (b'AXIS_MIXTURE_SET', "Set mixture lever 1 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_MIXTURE1_SET', "Set mixture lever 1 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_MIXTURE2_SET', "Set mixture lever 2 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_MIXTURE3_SET', "Set mixture lever 3 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_MIXTURE4_SET', "Set mixture lever 4 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'MIXTURE_SET_BEST', "Set mixture levers to current best power setting", "Shared Cockpit"), + (b'MIXTURE_DECR_SMALL', "Decrement mixture levers small", "Shared Cockpit"), + (b'MIXTURE1_DECR_SMALL', "Decrement mixture lever 1 small", "Shared Cockpit"), + (b'MIXTURE2_DECR_SMALL', "Decrement mixture lever 4 small", "Shared Cockpit"), + (b'MIXTURE3_DECR_SMALL', "Decrement mixture lever 4 small", "Shared Cockpit"), + (b'MIXTURE4_DECR_SMALL', "Decrement mixture lever 4 small", "Shared Cockpit"), + (b'PROP_PITCH_SET', "Set prop pitch levers (0 to 16383),", "Shared Cockpit"), + (b'PROP_PITCH_LO', "Set prop pitch levers max (lo pitch),", "Shared Cockpit"), + (b'PROP_PITCH_INCR', "Increment prop pitch levers", "Shared Cockpit"), + (b'PROP_PITCH_INCR_SMALL', "Increment prop pitch levers small", "Shared Cockpit"), + (b'PROP_PITCH_DECR', "Decrement prop pitch levers", "Shared Cockpit"), + (b'PROP_PITCH_HI', "Set prop pitch levers min (hi pitch),", "Shared Cockpit"), + (b'PROP_PITCH1_SET', "Set prop pitch lever 1 exact value (0 to 16383),", "Shared Cockpit"), + (b'PROP_PITCH2_SET', "Set prop pitch lever 2 exact value (0 to 16383),", "Shared Cockpit"), + (b'PROP_PITCH3_SET', "Set prop pitch lever 3 exact value (0 to 16383),", "Shared Cockpit"), + (b'PROP_PITCH4_SET', "Set prop pitch lever 4 exact value (0 to 16383),", "Shared Cockpit"), + (b'PROP_PITCH1_LO', "Set prop pitch lever 1 max (lo pitch),", "Shared Cockpit"), + (b'PROP_PITCH1_INCR', "Increment prop pitch lever 1", "Shared Cockpit"), + (b'PROP_PITCH1_INCR_SMALL', "Increment prop pitch lever 1 small", "Shared Cockpit"), + (b'PROP_PITCH1_DECR', "Decrement prop pitch lever 1", "Shared Cockpit"), + (b'PROP_PITCH1_HI', "Set prop pitch lever 1 min (hi pitch),", "Shared Cockpit"), + (b'PROP_PITCH2_LO', "Set prop pitch lever 2 max (lo pitch),", "Shared Cockpit"), + (b'PROP_PITCH2_INCR', "Increment prop pitch lever 2", "Shared Cockpit"), + (b'PROP_PITCH2_INCR_SMALL', "Increment prop pitch lever 2 small", "Shared Cockpit"), + (b'PROP_PITCH2_DECR', "Decrement prop pitch lever 2", "Shared Cockpit"), + (b'PROP_PITCH2_HI', "Set prop pitch lever 2 min (hi pitch),", "Shared Cockpit"), + (b'PROP_PITCH3_LO', "Set prop pitch lever 3 max (lo pitch),", "Shared Cockpit"), + (b'PROP_PITCH3_INCR', "Increment prop pitch lever 3", "Shared Cockpit"), + (b'PROP_PITCH3_INCR_SMALL', "Increment prop pitch lever 3 small", "Shared Cockpit"), + (b'PROP_PITCH3_DECR', "Decrement prop pitch lever 3", "Shared Cockpit"), + 
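+            # Usage sketch: *_SET events carry a value; lever positions use
+            # 0..16383, and the AXIS_* variants use -16383..+16383. With `ae`
+            # as in the note above:
+            #
+            #     ae.find("THROTTLE_SET")(int(0.75 * 16383))  # ~75% throttle
+            #     ae.find("AXIS_PROPELLER_SET")(16383)        # prop levers full forward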
(b'PROP_PITCH3_HI', "Set prop pitch lever 3 min (hi pitch),", "Shared Cockpit"), + (b'PROP_PITCH4_LO', "Set prop pitch lever 4 max (lo pitch),", "Shared Cockpit"), + (b'PROP_PITCH4_INCR', "Increment prop pitch lever 4", "Shared Cockpit"), + (b'PROP_PITCH4_INCR_SMALL', "Increment prop pitch lever 4 small", "Shared Cockpit"), + (b'PROP_PITCH4_DECR', "Decrement prop pitch lever 4", "Shared Cockpit"), + (b'PROP_PITCH4_HI', "Set prop pitch lever 4 min (hi pitch),", "Shared Cockpit"), + (b'AXIS_PROPELLER_SET', "Set propeller levers exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_PROPELLER1_SET', "Set propeller lever 1 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_PROPELLER2_SET', "Set propeller lever 2 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_PROPELLER3_SET', "Set propeller lever 3 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'AXIS_PROPELLER4_SET', "Set propeller lever 4 exact value (-16383 to +16383),", "Shared Cockpit"), + (b'JET_STARTER', "Selects jet engine starter (for +/- sequence),", "Shared Cockpit"), + (b'MAGNETO_SET', "Sets magnetos (0,1),", "Shared Cockpit"), + (b'TOGGLE_STARTER1', "Toggle starter 1", "Shared Cockpit"), + (b'TOGGLE_STARTER2', "Toggle starter 2", "Shared Cockpit"), + (b'TOGGLE_STARTER3', "Toggle starter 3", "Shared Cockpit"), + (b'TOGGLE_STARTER4', "Toggle starter 4", "Shared Cockpit"), + (b'TOGGLE_ALL_STARTERS', "Toggle starters", "Shared Cockpit"), + (b'ENGINE_AUTO_START', "Triggers auto-start", "Shared Cockpit"), + (b'ENGINE_AUTO_SHUTDOWN', "Triggers auto-shutdown", "Shared Cockpit"), + (b'MAGNETO', "Selects magnetos (for +/- sequence),", "Shared Cockpit"), + (b'MAGNETO_DECR', "Decrease magneto switches positions", "Shared Cockpit"), + (b'MAGNETO_INCR', "Increase magneto switches positions", "Shared Cockpit"), + (b'MAGNETO1_OFF', "Set engine 1 magnetos off", "Shared Cockpit"), + (b'MAGNETO1_RIGHT', "Toggle engine 1 right magneto", "All aircraft"), + (b'MAGNETO1_LEFT', "Toggle engine 1 left magneto", "All aircraft"), + (b'MAGNETO1_BOTH', "Set engine 1 magnetos on", "Shared Cockpit"), + (b'MAGNETO1_START', "Set engine 1 magnetos on and toggle starter", "Shared Cockpit"), + (b'MAGNETO2_OFF', "Set engine 2 magnetos off", "Shared Cockpit"), + (b'MAGNETO2_RIGHT', "Toggle engine 2 right magneto", "All aircraft"), + (b'MAGNETO2_LEFT', "Toggle engine 2 left magneto", "All aircraft"), + (b'MAGNETO2_BOTH', "Set engine 2 magnetos on", "Shared Cockpit"), + (b'MAGNETO2_START', "Set engine 2 magnetos on and toggle starter", "Shared Cockpit"), + (b'MAGNETO3_OFF', "Set engine 3 magnetos off", "Shared Cockpit"), + (b'MAGNETO3_RIGHT', "Toggle engine 3 right magneto", "All aircraft"), + (b'MAGNETO3_LEFT', "Toggle engine 3 left magneto", "All aircraft"), + (b'MAGNETO3_BOTH', "Set engine 3 magnetos on", "Shared Cockpit"), + (b'MAGNETO3_START', "Set engine 3 magnetos on and toggle starter", "Shared Cockpit"), + (b'MAGNETO4_OFF', "Set engine 4 magnetos off", "Shared Cockpit"), + (b'MAGNETO4_RIGHT', "Toggle engine 4 right magneto", "All aircraft"), + (b'MAGNETO4_LEFT', "Toggle engine 4 left magneto", "All aircraft"), + (b'MAGNETO4_BOTH', "Set engine 4 magnetos on", "Shared Cockpit"), + (b'MAGNETO4_START', "Set engine 4 magnetos on and toggle starter", "Shared Cockpit"), + (b'MAGNETO_OFF', "Set engine magnetos off", "Shared Cockpit"), + (b'MAGNETO_RIGHT', "Set engine right magnetos on", "Shared Cockpit"), + (b'MAGNETO_LEFT', "Set engine left magnetos on", "Shared Cockpit"), + (b'MAGNETO_BOTH', "Set engine magnetos on", "Shared 
Cockpit"), + (b'MAGNETO_START', "Set engine magnetos on and toggle starters", "Shared Cockpit"), + (b'MAGNETO1_DECR', "Decrease engine 1 magneto switch position", "Shared Cockpit"), + (b'MAGNETO1_INCR', "Increase engine 1 magneto switch position", "Shared Cockpit"), + (b'MAGNETO2_DECR', "Decrease engine 2 magneto switch position", "Shared Cockpit"), + (b'MAGNETO2_INCR', "Increase engine 2 magneto switch position", "Shared Cockpit"), + (b'MAGNETO3_DECR', "Decrease engine 3 magneto switch position", "Shared Cockpit"), + (b'MAGNETO3_INCR', "Increase engine 3 magneto switch position", "Shared Cockpit"), + (b'MAGNETO4_DECR', "Decrease engine 4 magneto switch position", "Shared Cockpit"), + (b'MAGNETO4_INCR', "Increase engine 4 magneto switch position", "Shared Cockpit"), + (b'Not supported', "Set engine magneto switches", "Shared Cockpit"), + (b'MAGNETO1_SET', "Set engine 1 magneto switch", "Shared Cockpit"), + (b'MAGNETO2_SET', "Set engine 2 magneto switch", "Shared Cockpit"), + (b'MAGNETO3_SET', "Set engine 3 magneto switch", "Shared Cockpit"), + (b'MAGNETO4_SET', "Set engine 4 magneto switch", "Shared Cockpit"), + (b'ANTI_ICE_ON', "Sets anti-ice switches on", "Shared Cockpit"), + (b'ANTI_ICE_OFF', "Sets anti-ice switches off", "Shared Cockpit"), + (b'ANTI_ICE_SET', "Sets anti-ice switches from argument (0,1),", "Shared Cockpit"), + (b'ANTI_ICE_TOGGLE', "Toggle anti-ice switches", "Shared Cockpit"), + (b'ANTI_ICE_TOGGLE_ENG1', "Toggle engine 1 anti-ice switch", "Shared Cockpit"), + (b'ANTI_ICE_TOGGLE_ENG2', "Toggle engine 2 anti-ice switch", "Shared Cockpit"), + (b'ANTI_ICE_TOGGLE_ENG3', "Toggle engine 3 anti-ice switch", "Shared Cockpit"), + (b'ANTI_ICE_TOGGLE_ENG4', "Toggle engine 4 anti-ice switch", "Shared Cockpit"), + (b'ANTI_ICE_SET_ENG1', "Sets engine 1 anti-ice switch (0,1),", "Shared Cockpit"), + (b'ANTI_ICE_SET_ENG2', "Sets engine 2 anti-ice switch (0,1),", "Shared Cockpit"), + (b'ANTI_ICE_SET_ENG3', "Sets engine 3 anti-ice switch (0,1),", "Shared Cockpit"), + (b'ANTI_ICE_SET_ENG4', "Sets engine 4 anti-ice switch (0,1),", "Shared Cockpit"), + (b'TOGGLE_FUEL_VALVE_ALL', "Toggle engine fuel valves", "Shared Cockpit"), + (b'TOGGLE_FUEL_VALVE_ENG1', "Toggle engine 1 fuel valve", "All aircraft"), + (b'TOGGLE_FUEL_VALVE_ENG2', "Toggle engine 2 fuel valve", "All aircraft"), + (b'TOGGLE_FUEL_VALVE_ENG3', "Toggle engine 3 fuel valve", "All aircraft"), + (b'TOGGLE_FUEL_VALVE_ENG4', "Toggle engine 4 fuel valve", "All aircraft"), + (b'COWLFLAP1_SET', "Sets engine 1 cowl flap lever position (0 to 16383),", "Shared Cockpit"), + (b'COWLFLAP2_SET', "Sets engine 2 cowl flap lever position (0 to 16383),", "Shared Cockpit"), + (b'COWLFLAP3_SET', "Sets engine 3 cowl flap lever position (0 to 16383),", "Shared Cockpit"), + (b'COWLFLAP4_SET', "Sets engine 4 cowl flap lever position (0 to 16383),", "Shared Cockpit"), + (b'INC_COWL_FLAPS', "Increment cowl flap levers", "Shared Cockpit"), + (b'DEC_COWL_FLAPS', "Decrement cowl flap levers", "Shared Cockpit"), + (b'INC_COWL_FLAPS1', "Increment engine 1 cowl flap lever", "Shared Cockpit"), + (b'DEC_COWL_FLAPS1', "Decrement engine 1 cowl flap lever", "Shared Cockpit"), + (b'INC_COWL_FLAPS2', "Increment engine 2 cowl flap lever", "Shared Cockpit"), + (b'DEC_COWL_FLAPS2', "Decrement engine 2 cowl flap lever", "Shared Cockpit"), + (b'INC_COWL_FLAPS3', "Increment engine 3 cowl flap lever", "Shared Cockpit"), + (b'DEC_COWL_FLAPS3', "Decrement engine 3 cowl flap lever", "Shared Cockpit"), + (b'INC_COWL_FLAPS4', "Increment engine 4 cowl flap lever", "Shared Cockpit"), 
+ (b'DEC_COWL_FLAPS4', "Decrement engine 4 cowl flap lever", "Shared Cockpit"), + (b'FUEL_PUMP', "Toggle electric fuel pumps", "Shared Cockpit"), + (b'TOGGLE_ELECT_FUEL_PUMP', "Toggle electric fuel pumps", "Shared Cockpit"), + (b'TOGGLE_ELECT_FUEL_PUMP1', "Toggle engine 1 electric fuel pump", "All aircraft"), + (b'TOGGLE_ELECT_FUEL_PUMP2', "Toggle engine 2 electric fuel pump", "All aircraft"), + (b'TOGGLE_ELECT_FUEL_PUMP3', "Toggle engine 3 electric fuel pump", "All aircraft"), + (b'TOGGLE_ELECT_FUEL_PUMP4', "Toggle engine 4 electric fuel pump", "All aircraft"), + (b'ENGINE_PRIMER', "Trigger engine primers", "Shared Cockpit"), + (b'TOGGLE_PRIMER', "Trigger engine primers", "Shared Cockpit"), + (b'TOGGLE_PRIMER1', "Trigger engine 1 primer", "Shared Cockpit"), + (b'TOGGLE_PRIMER2', "Trigger engine 2 primer", "Shared Cockpit"), + (b'TOGGLE_PRIMER3', "Trigger engine 3 primer", "Shared Cockpit"), + (b'TOGGLE_PRIMER4', "Trigger engine 4 primer", "Shared Cockpit"), + (b'TOGGLE_FEATHER_SWITCHES', "Trigger propeller switches", "Shared Cockpit"), + (b'TOGGLE_FEATHER_SWITCH_1', "Trigger propeller 1 switch", "Shared Cockpit"), + (b'TOGGLE_FEATHER_SWITCH_2', "Trigger propeller 2 switch", "Shared Cockpit"), + (b'TOGGLE_FEATHER_SWITCH_3', "Trigger propeller 3 switch", "Shared Cockpit"), + (b'TOGGLE_FEATHER_SWITCH_4', "Trigger propeller 4 switch", "Shared Cockpit"), + (b'TOGGLE_PROPELLER_SYNC', "Turns propeller synchronization switch on", "Shared Cockpit"), + (b'TOGGLE_AUTOFEATHER_ARM', "Turns auto-feather arming switch on.", "Shared Cockpit"), + (b'TOGGLE_AFTERBURNER', "Toggles afterburners", "Shared Cockpit"), + (b'TOGGLE_AFTERBURNER1', "Toggles engine 1 afterburner", "Shared Cockpit"), + (b'TOGGLE_AFTERBURNER2', "Toggles engine 2 afterburner", "Shared Cockpit"), + (b'TOGGLE_AFTERBURNER3', "Toggles engine 3 afterburner", "Shared Cockpit"), + (b'TOGGLE_AFTERBURNER4', "Toggles engine 4 afterburner", "Shared Cockpit"), + (b'ENGINE', "Sets engines for 1,2,3,4 selection (to be followed by SELECT_n),", "Shared Cockpit"), + ] + + class __Flight_Controls(EventHelper): + list = [ + (b'SPOILERS_TOGGLE', "Toggles spoiler handle ", "All aircraft"), + (b'FLAPS_UP', "Sets flap handle to full retract position", "All aircraft"), + (b'FLAPS_1', "Sets flap handle to first extension position", "All aircraft"), + (b'FLAPS_2', "Sets flap handle to second extension position", "All aircraft"), + (b'FLAPS_3', "Sets flap handle to third extension position", "All aircraft"), + (b'FLAPS_DOWN', "Sets flap handle to full extension position", "All aircraft"), + (b'ELEV_TRIM_DN', "Increments elevator trim down", "Shared Cockpit"), + (b'ELEV_DOWN', "Increments elevator down", "Shared Cockpit (Pilot only),."), + (b'AILERONS_LEFT', "Increments ailerons left", "Shared Cockpit (Pilot only),."), + (b'CENTER_AILER_RUDDER', "Centers aileron and rudder positions", "Shared Cockpit"), + (b'AILERONS_RIGHT', "Increments ailerons right", "Shared Cockpit (Pilot only),."), + (b'ELEV_TRIM_UP', "Increment elevator trim up", "Shared Cockpit"), + (b'ELEV_UP', "Increments elevator up", "Shared Cockpit (Pilot only),."), + (b'Unsupported', "Increments elevator down", "Shared Cockpit"), + (b'Unsupported', "Increments elevator up", "Shared Cockpit"), + (b'Unsupported', "Increments ailerons left", "Shared Cockpit"), + (b'Unsupported', "Centers aileron position", "Shared Cockpit"), + (b'Unsupported', "Increments ailerons right", "Shared Cockpit"), + (b'RUDDER_LEFT', "Increments rudder left", "Shared Cockpit"), + (b'RUDDER_CENTER', "Centers rudder position", 
"Shared Cockpit"), + (b'RUDDER_RIGHT', "Increments rudder right", "Shared Cockpit"), + (b'ELEVATOR_SET', "Sets elevator position (-16383 - +16383),", "Shared Cockpit"), + (b'AILERON_SET', "Sets aileron position (-16383 - +16383),", "Shared Cockpit"), + (b'RUDDER_SET', "Sets rudder position (-16383 - +16383),", "Shared Cockpit"), + (b'FLAPS_INCR', "Increments flap handle position", "All aircraft"), + (b'FLAPS_DECR', "Decrements flap handle position", "All aircraft"), + (b'AXIS_ELEVATOR_SET', "Sets elevator position (-16383 - +16383),", "Shared Cockpit (Pilot only, and not transmitted to Co-pilot)"), + (b'AXIS_AILERONS_SET', "Sets aileron position (-16383 - +16383),", "Shared Cockpit (Pilot only, and not transmitted to Co-pilot)"), + (b'AXIS_RUDDER_SET', "Sets rudder position (-16383 - +16383),", "Shared Cockpit (Pilot only, and not transmitted to Co-pilot)"), + (b'AXIS_ELEV_TRIM_SET', "Sets elevator trim position (-16383 - +16383),", "Shared Cockpit"), + (b'SPOILERS_SET', "Sets spoiler handle position (0 to 16383),", "All aircraft"), + (b'SPOILERS_ARM_TOGGLE', "Toggles arming of auto-spoilers", "All aircraft"), + (b'SPOILERS_ON', "Sets spoiler handle to full extend position", "All aircraft"), + (b'SPOILERS_OFF', "Sets spoiler handle to full retract position", "All aircraft"), + (b'SPOILERS_ARM_ON', "Sets auto-spoiler arming on", "All aircraft"), + (b'SPOILERS_ARM_OFF', "Sets auto-spoiler arming off", "All aircraft"), + (b'SPOILERS_ARM_SET', "Sets auto-spoiler arming (0,1),", "All aircraft"), + (b'AILERON_TRIM_LEFT', "Increments aileron trim left", "Shared Cockpit"), + (b'AILERON_TRIM_RIGHT', "Increments aileron trim right", "Shared Cockpit"), + (b'RUDDER_TRIM_LEFT', "Increments rudder trim left", "Shared Cockpit"), + (b'RUDDER_TRIM_RIGHT', "Increments aileron trim right", "Shared Cockpit"), + (b'AXIS_SPOILER_SET', "Sets spoiler handle position (-16383 - +16383),", "All aircraft"), + (b'FLAPS_SET', "Sets flap handle to closest increment (0 to 16383),", "All aircraft"), + (b'ELEVATOR_TRIM_SET', "Sets elevator trim position (0 to 16383),", "Shared Cockpit"), + (b'AXIS_FLAPS_SET', "Sets flap handle to closest increment (-16383 - +16383),", "Shared Cockpit"), + ] + + class __Autopilot(EventHelper): + list = [ + (b'AP_MASTER', "Toggles AP on/off", "Shared Cockpit"), + (b'AUTOPILOT_OFF', "Turns AP off", "Shared Cockpit"), + (b'AUTOPILOT_ON', "Turns AP on", "Shared Cockpit"), + (b'YAW_DAMPER_TOGGLE', "Toggles yaw damper on/off", "Shared Cockpit"), + (b'AP_PANEL_HEADING_HOLD', "Toggles heading hold mode on/off", "Shared Cockpit"), + (b'AP_PANEL_ALTITUDE_HOLD', "Toggles altitude hold mode on/off", "Shared Cockpit"), + (b'AP_ATT_HOLD_ON', "Turns on AP wing leveler and pitch hold mode", "Shared Cockpit"), + (b'AP_LOC_HOLD_ON', "Turns AP localizer hold on/armed and glide-slope hold mode off", "Shared Cockpit"), + (b'AP_APR_HOLD_ON', "Turns both AP localizer and glide-slope modes on/armed", "Shared Cockpit"), + (b'AP_HDG_HOLD_ON', "Turns heading hold mode on", "Shared Cockpit"), + (b'AP_ALT_HOLD_ON', "Turns altitude hold mode on", "Shared Cockpit"), + (b'AP_WING_LEVELER_ON', "Turns wing leveler mode on", "Shared Cockpit"), + (b'AP_BC_HOLD_ON', "Turns localizer back course hold mode on/armed", "Shared Cockpit"), + (b'AP_NAV1_HOLD_ON', "Turns lateral hold mode on", "Shared Cockpit"), + (b'AP_ATT_HOLD_OFF', "Turns off attitude hold mode", "Shared Cockpit"), + (b'AP_LOC_HOLD_OFF', "Turns off localizer hold mode", "Shared Cockpit"), + (b'AP_APR_HOLD_OFF', "Turns off approach hold mode", "Shared Cockpit"), + 
(b'AP_HDG_HOLD_OFF', "Turns off heading hold mode", "Shared Cockpit"), + (b'AP_ALT_HOLD_OFF', "Turns off altitude hold mode", "Shared Cockpit"), + (b'AP_WING_LEVELER_OFF', "Turns off wing leveler mode", "Shared Cockpit"), + (b'AP_BC_HOLD_OFF', "Turns off backcourse mode for localizer hold", "Shared Cockpit"), + (b'AP_NAV1_HOLD_OFF', "Turns off nav hold mode", "Shared Cockpit"), + (b'AP_AIRSPEED_HOLD', "Toggles airspeed hold mode", "Shared Cockpit"), + (b'AUTO_THROTTLE_ARM', "Toggles autothrottle arming mode", "Shared Cockpit"), + (b'AUTO_THROTTLE_TO_GA', "Toggles Takeoff/Go Around mode", "Shared Cockpit"), + (b'HEADING_BUG_INC', "Increments heading hold reference bug", "Shared Cockpit"), + (b'HEADING_BUG_DEC', "Decrements heading hold reference bug", "Shared Cockpit"), + (b'HEADING_BUG_SET', "Set heading hold reference bug (degrees),", "Shared Cockpit"), + (b'AP_PANEL_SPEED_HOLD', "Toggles airspeed hold mode", "Shared Cockpit"), + (b'AP_ALT_VAR_INC', "Increments reference altitude", "Shared Cockpit"), + (b'AP_ALT_VAR_DEC', "Decrements reference altitude", "Shared Cockpit"), + (b'AP_VS_VAR_INC', "Increments vertical speed reference", "Shared Cockpit"), + (b'AP_VS_VAR_DEC', "Decrements vertical speed reference", "Shared Cockpit"), + (b'AP_SPD_VAR_INC', "Increments airspeed hold reference", "Shared Cockpit"), + (b'AP_SPD_VAR_DEC', "Decrements airspeed hold reference", "Shared Cockpit"), + (b'AP_PANEL_MACH_HOLD', "Toggles mach hold", "Shared Cockpit"), + (b'AP_MACH_VAR_INC', "Increments reference mach", "Shared Cockpit"), + (b'AP_MACH_VAR_DEC', "Decrements reference mach", "Shared Cockpit"), + (b'AP_MACH_HOLD', "Toggles mach hold", "Shared Cockpit"), + (b'AP_ALT_VAR_SET_METRIC', "Sets reference altitude in meters", "Shared Cockpit"), + (b'AP_VS_VAR_SET_ENGLISH', "Sets reference vertical speed in feet per minute", "Shared Cockpit"), + (b'AP_SPD_VAR_SET', "Sets airspeed reference in knots", "Shared Cockpit"), + (b'AP_MACH_VAR_SET', "Sets mach reference", "Shared Cockpit"), + (b'YAW_DAMPER_ON', "Turns yaw damper on", "Shared Cockpit"), + (b'YAW_DAMPER_OFF', "Turns yaw damper off", "Shared Cockpit"), + (b'YAW_DAMPER_SET', "Sets yaw damper on/off (1,0),", "Shared Cockpit"), + (b'AP_AIRSPEED_ON', "Turns airspeed hold on", "Shared Cockpit"), + (b'AP_AIRSPEED_OFF', "Turns airspeed hold off", "Shared Cockpit"), + (b'AP_AIRSPEED_SET', "Sets airspeed hold on/off (1,0),", "Shared Cockpit"), + (b'AP_MACH_ON', "Turns mach hold on", "Shared Cockpit"), + (b'AP_MACH_OFF', "Turns mach hold off", "Shared Cockpit"), + (b'AP_MACH_SET', "Sets mach hold on/off (1,0),", "Shared Cockpit"), + (b'AP_PANEL_ALTITUDE_ON', "Turns altitude hold mode on (without capturing current altitude),", "Shared Cockpit"), + (b'AP_PANEL_ALTITUDE_OFF', "Turns altitude hold mode off", "Shared Cockpit"), + (b'AP_PANEL_ALTITUDE_SET', "Sets altitude hold mode on/off (1,0),", "Shared Cockpit"), + (b'AP_PANEL_HEADING_ON', "Turns heading mode on (without capturing current heading),", "Shared Cockpit"), + (b'AP_PANEL_HEADING_OFF', "Turns heading mode off", "Shared Cockpit"), + (b'AP_PANEL_HEADING_SET', "Set heading mode on/off (1,0),", "Shared Cockpit"), + (b'AP_PANEL_MACH_ON', "Turns on mach hold", "Shared Cockpit"), + (b'AP_PANEL_MACH_OFF', "Turns off mach hold", "Shared Cockpit"), + (b'AP_PANEL_MACH_SET', "Sets mach hold on/off (1,0),", "Shared Cockpit"), + (b'AP_PANEL_SPEED_ON', "Turns on speed hold mode", "Shared Cockpit"), + (b'AP_PANEL_SPEED_OFF', "Turns off speed hold mode", "Shared Cockpit"), + (b'AP_PANEL_SPEED_SET', "Set speed hold 
mode on/off (1,0),", "Shared Cockpit"), + (b'AP_ALT_VAR_SET_ENGLISH', "Sets altitude reference in feet", "Shared Cockpit"), + (b'AP_VS_VAR_SET_METRIC', "Sets vertical speed reference in meters per minute", "Shared Cockpit"), + (b'TOGGLE_FLIGHT_DIRECTOR', "Toggles flight director on/off", "Shared Cockpit"), + (b'SYNC_FLIGHT_DIRECTOR_PITCH', "Synchronizes flight director pitch with current aircraft pitch", "Shared Cockpit"), + (b'INCREASE_AUTOBRAKE_CONTROL', "Increments autobrake level", "Shared Cockpit"), + (b'DECREASE_AUTOBRAKE_CONTROL', "Decrements autobrake level", "Shared Cockpit"), + (b'AP_PANEL_SPEED_HOLD_TOGGLE', "Turns airspeed hold mode on with current airspeed", "Shared Cockpit"), + (b'Unsupported', "Sets airspeed reference to current airspeed", "Shared Cockpit"), + (b'AP_PANEL_MACH_HOLD_TOGGLE', "Sets mach hold reference to current mach", "Shared Cockpit"), + (b'AP_NAV_SELECT_SET', "Sets the nav (1 or 2), which is used by the Nav hold modes", "Shared Cockpit"), + (b'HEADING_BUG_SELECT', "Selects the heading bug for use with +/-", "Shared Cockpit"), + (b'ALTITUDE_BUG_SELECT', "Selects the altitude reference for use with +/-", "Shared Cockpit"), + (b'VSI_BUG_SELECT', "Selects the vertical speed reference for use with +/-", "Shared Cockpit"), + (b'AIRSPEED_BUG_SELECT', "Selects the airspeed reference for use with +/-", "Shared Cockpit"), + (b'AP_PITCH_REF_INC_UP', "Increments the pitch reference for pitch hold mode", "Shared Cockpit"), + (b'AP_PITCH_REF_INC_DN', "Decrements the pitch reference for pitch hold mode", "Shared Cockpit"), + (b'AP_PITCH_REF_SELECT', "Selects pitch reference for use with +/-", "Shared Cockpit"), + (b'AP_ATT_HOLD', "Toggle attitude hold mode", "Shared Cockpit"), + (b'AP_LOC_HOLD', "Toggles localizer (only), hold mode", "Shared Cockpit"), + (b'AP_APR_HOLD', "Toggles approach hold (localizer and glide-slope),", "Shared Cockpit"), + (b'AP_HDG_HOLD', "Toggles heading hold mode", "Shared Cockpit"), + (b'AP_ALT_HOLD', "Toggles altitude hold mode", "Shared Cockpit"), + (b'AP_WING_LEVELER', "Toggles wing leveler mode", "Shared Cockpit"), + (b'AP_BC_HOLD', "Toggles the backcourse mode for the localizer hold", "Shared Cockpit"), + (b'AP_NAV1_HOLD', "Toggles the nav hold mode", "Shared Cockpit"), + (b'AP_MAX_BANK_INC', "Autopilot max bank angle increment.", "Shared Cockpit"), + (b'AP_MAX_BANK_DEC', "Autopilot max bank angle decrement.", "Shared Cockpit"), + (b'AP_N1_HOLD', "Autopilot, hold the N1 percentage at its current level.", "Shared Cockpit"), + (b'AP_N1_REF_INC', "Increment the autopilot N1 reference.", "Shared Cockpit"), + (b'AP_N1_REF_DEC', "Decrement the autopilot N1 reference.", "Shared Cockpit"), + (b'AP_N1_REF_SET', "Sets the autopilot N1 reference.", "Shared Cockpit"), + (b'FLY_BY_WIRE_ELAC_TOGGLE', "Turn on or off the fly by wire Elevators and Ailerons computer.", "Shared Cockpit"), + (b'FLY_BY_WIRE_FAC_TOGGLE', "Turn on or off the fly by wire Flight Augmentation computer.", "Shared Cockpit"), + (b'FLY_BY_WIRE_SEC_TOGGLE', "Turn on or off the fly by wire Spoilers and Elevators computer.", "Shared Cockpit"), + (b'AP_VS_HOLD', "Toggle VS hold mode", "Shared Cockpit"), + (b'FLIGHT_LEVEL_CHANGE', "Toggle FLC mode", "Shared Cockpit"), + ] + + class __Fuel_System(EventHelper): + list = [ + (b'FUEL_SELECTOR_OFF', "Turns selector 1 to OFF position", "Shared Cockpit"), + (b'FUEL_SELECTOR_ALL', "Turns selector 1 to ALL position", "Shared Cockpit"), + (b'FUEL_SELECTOR_LEFT', "Turns selector 1 to LEFT position (burns from tip then aux then main),", "Shared 
Cockpit"), + (b'FUEL_SELECTOR_RIGHT', "Turns selector 1 to RIGHT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_LEFT_AUX', "Turns selector 1 to LEFT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_RIGHT_AUX', "Turns selector 1 to RIGHT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_CENTER', "Turns selector 1 to CENTER position", "Shared Cockpit"), + (b'FUEL_SELECTOR_SET', '''Sets selector 1 position (see code list below), + FUEL_TANK_SELECTOR_OFF = 0 + FUEL_TANK_SELECTOR_ALL = 1 + FUEL_TANK_SELECTOR_LEFT = 2 + FUEL_TANK_SELECTOR_RIGHT = 3 + FUEL_TANK_SELECTOR_LEFT_AUX = 4 + FUEL_TANK_SELECTOR_RIGHT_AUX = 5 + FUEL_TANK_SELECTOR_CENTER = 6 + FUEL_TANK_SELECTOR_CENTER2 = 7 + FUEL_TANK_SELECTOR_CENTER3 = 8 + FUEL_TANK_SELECTOR_EXTERNAL1 = 9 + FUEL_TANK_SELECTOR_EXTERNAL2 = 10 + FUEL_TANK_SELECTOR_RIGHT_TIP = 11 + FUEL_TANK_SELECTOR_LEFT_TIP = 12 + FUEL_TANK_SELECTOR_CROSSFEED = 13 + FUEL_TANK_SELECTOR_CROSSFEED_L2R = 14 + FUEL_TANK_SELECTOR_CROSSFEED_R2L = 15 + FUEL_TANK_SELECTOR_BOTH = 16 + FUEL_TANK_SELECTOR_EXTERNAL_ALL = 17 + FUEL_TANK_SELECTOR_ISOLATE = 18''', "Shared Cockpit"), + (b'FUEL_SELECTOR_2_OFF', "Turns selector 2 to OFF position", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_ALL', "Turns selector 2 to ALL position", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_LEFT', "Turns selector 2 to LEFT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_RIGHT', "Turns selector 2 to RIGHT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_LEFT_AUX', "Turns selector 2 to LEFT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_RIGHT_AUX', "Turns selector 2 to RIGHT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_CENTER', "Turns selector 2 to CENTER position", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_SET', "Sets selector 2 position (see code list below),", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_OFF', "Turns selector 3 to OFF position", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_ALL', "Turns selector 3 to ALL position", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_LEFT', "Turns selector 3 to LEFT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_RIGHT', "Turns selector 3 to RIGHT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_LEFT_AUX', "Turns selector 3 to LEFT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_RIGHT_AUX', "Turns selector 3 to RIGHT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_CENTER', "Turns selector 3 to CENTER position", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_SET', "Sets selector 3 position (see code list below),", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_OFF', "Turns selector 4 to OFF position", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_ALL', "Turns selector 4 to ALL position", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_LEFT', "Turns selector 4 to LEFT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_RIGHT', "Turns selector 4 to RIGHT position (burns from tip then aux then main),", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_LEFT_AUX', "Turns selector 4 to LEFT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_RIGHT_AUX', "Turns selector 4 to RIGHT AUX position", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_CENTER', "Turns selector 4 to CENTER position", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_SET', "Sets selector 4 position (see code list below),", "Shared Cockpit"), + (b'CROSS_FEED_OPEN', "Opens cross feed valve (when used in 
conjunction with \"isolate\" tank),", "Shared Cockpit"), + (b'CROSS_FEED_TOGGLE', "Toggles crossfeed valve (when used in conjunction with \"isolate\" tank),", "Shared Cockpit"), + (b'CROSS_FEED_OFF', "Closes crossfeed valve (when used in conjunction with \"isolate\" tank),", "Shared Cockpit"), + (b'FUEL_DUMP_SWITCH_SET', "Set to True or False. The switch can only be set to True if fuel_dump_rate is specified in the aircraft configuration file, which indicates that a fuel dump system exists.", "Shared Cockpit"), + (b'ANTIDETONATION_TANK_VALVE_TOGGLE', "Toggle the antidetonation valve. Pass a value to determine which tank, if there are multiple tanks, to use. Tanks are indexed from 1. Refer to the document Notes on Aircraft Systems.", "Shared Cockpit"), + (b'NITROUS_TANK_VALVE_TOGGLE', "Toggle the nitrous valve. Pass a value to determine which tank, if there are multiple tanks, to use. Tanks are indexed from 1.", "Shared Cockpit"), + (b'REPAIR_AND_REFUEL', "Fully repair and refuel the user aircraft. Ignored if flight realism is enforced.", "Shared Cockpit"), + (b'FUEL_DUMP_TOGGLE', "Turns on or off the fuel dump switch.", "Shared Cockpit"), + (b'REQUEST_FUEL_KEY', "Request a fuel truck. The aircraft must be in a parking spot for this to be successful.", "Shared Cockpit"), + ] + + class __Fuel_Selection_Keys(EventHelper): + list = [ + (b'FUEL_SELECTOR_LEFT_MAIN', "Sets the fuel selector. Fuel will be taken in the order left tip, left aux, then main fuel tanks.", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_LEFT_MAIN', "Sets the fuel selector for engine 2.", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_LEFT_MAIN', "Sets the fuel selector for engine 3.", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_LEFT_MAIN', "Sets the fuel selector for engine 4.", "Shared Cockpit"), + (b'FUEL_SELECTOR_RIGHT_MAIN', "Sets the fuel selector. Fuel will be taken in the order right tip, right aux, then main fuel tanks.", "Shared Cockpit"), + (b'FUEL_SELECTOR_2_RIGHT_MAIN', "Sets the fuel selector for engine 2.", "Shared Cockpit"), + (b'FUEL_SELECTOR_3_RIGHT_MAIN', "Sets the fuel selector for engine 3.", "Shared Cockpit"), + (b'FUEL_SELECTOR_4_RIGHT_MAIN', "Sets the fuel selector for engine 4.", "Shared Cockpit"), + ] + + class __Avionics(EventHelper): + list = [ + (b'XPNDR', "Sequentially selects the transponder digits for use with +/-.", "Shared Cockpit"), + (b'ADF', "Sequentially selects the ADF tuner digits for use with +/-. Follow by KEY_SELECT_2 for ADF 2.", "Shared Cockpit"), + (b'DME', "Selects the DME for use with +/-", "Shared Cockpit"), + (b'COM_RADIO', "Sequentially selects the COM tuner digits for use with +/-. Follow by KEY_SELECT_2 for COM 2.", "All aircraft"), + (b'VOR_OBS', "Sequentially selects the VOR OBS for use with +/-. Follow by KEY_SELECT_2 for VOR 2.", "Shared Cockpit"), + (b'NAV_RADIO', "Sequentially selects the NAV tuner digits for use with +/-. 
Follow by KEY_SELECT_2 for NAV 2.", "Shared Cockpit"), + (b'COM_RADIO_WHOLE_DEC', "Decrements COM by one MHz", "All aircraft"), + (b'COM_RADIO_WHOLE_INC', "Increments COM by one MHz", "All aircraft"), + (b'COM_RADIO_FRACT_DEC', "Decrements COM by 25 KHz", "All aircraft"), + (b'COM_RADIO_FRACT_INC', "Increments COM by 25 KHz", "All aircraft"), + (b'NAV1_RADIO_WHOLE_DEC', "Decrements Nav 1 by one MHz", "Shared Cockpit"), + (b'NAV1_RADIO_WHOLE_INC', "Increments Nav 1 by one MHz", "Shared Cockpit"), + (b'NAV1_RADIO_FRACT_DEC', "Decrements Nav 1 by 25 KHz", "Shared Cockpit"), + (b'NAV1_RADIO_FRACT_INC', "Increments Nav 1 by 25 KHz", "Shared Cockpit"), + (b'NAV2_RADIO_WHOLE_DEC', "Decrements Nav 2 by one MHz", "Shared Cockpit"), + (b'NAV2_RADIO_WHOLE_INC', "Increments Nav 2 by one MHz", "Shared Cockpit"), + (b'NAV2_RADIO_FRACT_DEC', "Decrements Nav 2 by 25 KHz", "Shared Cockpit"), + (b'NAV2_RADIO_FRACT_INC', "Increments Nav 2 by 25 KHz", "Shared Cockpit"), + (b'ADF_100_INC', "Increments ADF by 100 KHz", "Shared Cockpit"), + (b'ADF_10_INC', "Increments ADF by 10 KHz", "Shared Cockpit"), + (b'ADF_1_INC', "Increments ADF by 1 KHz", "Shared Cockpit"), + (b'XPNDR_1000_INC', "Increments first digit of transponder", "Shared Cockpit"), + (b'XPNDR_100_INC', "Increments second digit of transponder", "Shared Cockpit"), + (b'XPNDR_10_INC', "Increments third digit of transponder", "Shared Cockpit"), + (b'XPNDR_1_INC', "Increments fourth digit of transponder", "Shared Cockpit"), + (b'VOR1_OBI_DEC', "Decrements the VOR 1 OBS setting", "Shared Cockpit"), + (b'VOR1_OBI_INC', "Increments the VOR 1 OBS setting", "Shared Cockpit"), + (b'VOR2_OBI_DEC', "Decrements the VOR 2 OBS setting", "Shared Cockpit"), + (b'VOR2_OBI_INC', "Increments the VOR 2 OBS setting", "Shared Cockpit"), + (b'ADF_100_DEC', "Decrements ADF by 100 KHz", "Shared Cockpit"), + (b'ADF_10_DEC', "Decrements ADF by 10 KHz", "Shared Cockpit"), + (b'ADF_1_DEC', "Decrements ADF by 1 KHz", "Shared Cockpit"), + (b'COM_RADIO_SET', "Sets COM frequency (BCD Hz),", "All aircraft"), + (b'NAV1_RADIO_SET', "Sets NAV 1 frequency (BCD Hz),", "Shared Cockpit"), + (b'NAV2_RADIO_SET', "Sets NAV 2 frequency (BCD Hz),", "Shared Cockpit"), + (b'ADF_SET', "Sets ADF frequency (BCD Hz),", "Shared Cockpit"), + (b'XPNDR_SET', "Sets transponder code (BCD),", "All aircraft"), + (b'VOR1_SET', "Sets OBS 1 (0 to 360),", "Shared Cockpit"), + (b'VOR2_SET', "Sets OBS 2 (0 to 360),", "Shared Cockpit"), + (b'DME1_TOGGLE', "Sets DME display to Nav 1", "Shared Cockpit"), + (b'DME2_TOGGLE', "Sets DME display to Nav 2", "Shared Cockpit"), + (b'RADIO_VOR1_IDENT_DISABLE', "Turns NAV 1 ID off", "Shared Cockpit"), + (b'RADIO_VOR2_IDENT_DISABLE', "Turns NAV 2 ID off", "Shared Cockpit"), + (b'RADIO_DME1_IDENT_DISABLE', "Turns DME 1 ID off", "Shared Cockpit"), + (b'RADIO_DME2_IDENT_DISABLE', "Turns DME 2 ID off", "Shared Cockpit"), + (b'RADIO_ADF_IDENT_DISABLE', "Turns ADF 1 ID off", "Shared Cockpit"), + (b'RADIO_VOR1_IDENT_ENABLE', "Turns NAV 1 ID on", "Shared Cockpit"), + (b'RADIO_VOR2_IDENT_ENABLE', "Turns NAV 2 ID on", "Shared Cockpit"), + (b'RADIO_DME1_IDENT_ENABLE', "Turns DME 1 ID on", "Shared Cockpit"), + (b'RADIO_DME2_IDENT_ENABLE', "Turns DME 2 ID on", "Shared Cockpit"), + (b'RADIO_ADF_IDENT_ENABLE', "Turns ADF 1 ID on", "Shared Cockpit"), + (b'RADIO_VOR1_IDENT_TOGGLE', "Toggles NAV 1 ID", "Shared Cockpit"), + (b'RADIO_VOR2_IDENT_TOGGLE', "Toggles NAV 2 ID", "Shared Cockpit"), + (b'RADIO_DME1_IDENT_TOGGLE', "Toggles DME 1 ID", "Shared Cockpit"), + (b'RADIO_DME2_IDENT_TOGGLE', "Toggles 
DME 2 ID", "Shared Cockpit"), + (b'RADIO_ADF_IDENT_TOGGLE', "Toggles ADF 1 ID", "Shared Cockpit"), + (b'RADIO_VOR1_IDENT_SET', "Sets NAV 1 ID (on/off),", "Shared Cockpit"), + (b'RADIO_VOR2_IDENT_SET', "Sets NAV 2 ID (on/off),", "Shared Cockpit"), + (b'RADIO_DME1_IDENT_SET', "Sets DME 1 ID (on/off),", "Shared Cockpit"), + (b'RADIO_DME2_IDENT_SET', "Sets DME 2 ID (on/off),", "Shared Cockpit"), + (b'RADIO_ADF_IDENT_SET', "Sets ADF 1 ID (on/off),", "Shared Cockpit"), + (b'ADF_CARD_INC', "Increments ADF card", "Shared Cockpit"), + (b'ADF_CARD_DEC', "Decrements ADF card", "Shared Cockpit"), + (b'ADF_CARD_SET', "Sets ADF card (0-360),", "Shared Cockpit"), + (b'TOGGLE_DME', "Toggles between NAV 1 and NAV 2", "Shared Cockpit"), + (b'AVIONICS_MASTER_SET', "Sets the avionics master switch", "All aircraft"), + (b'TOGGLE_AVIONICS_MASTER', "Toggles the avionics master switch", "All aircraft"), + (b'COM_STBY_RADIO_SET', "Sets COM 1 standby frequency (BCD Hz),", "All aircraft"), + (b'COM_STBY_RADIO_SWAP', "Swaps COM 1 frequency with standby", "All aircraft"), + (b'COM_RADIO_FRACT_DEC_CARRY', "Decrement COM 1 frequency by 25 KHz, and carry when digit wraps", "All aircraft"), + (b'COM_RADIO_FRACT_INC_CARRY', "Increment COM 1 frequency by 25 KHz, and carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_WHOLE_DEC', "Decrement COM 2 frequency by 1 MHz, with no carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_WHOLE_INC', "Increment COM 2 frequency by 1 MHz, with no carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_FRACT_DEC', "Decrement COM 2 frequency by 25 KHz, with no carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_FRACT_DEC_CARRY', "Decrement COM 2 frequency by 25 KHz, and carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_FRACT_INC', "Increment COM 2 frequency by 25 KHz, with no carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_FRACT_INC_CARRY', "Increment COM 2 frequency by 25 KHz, and carry when digit wraps", "All aircraft"), + (b'COM2_RADIO_SET', "Sets COM 2 frequency (BCD Hz),", "All aircraft"), + (b'COM2_STBY_RADIO_SET', "Sets COM 2 standby frequency (BCD Hz),", "All aircraft"), + (b'COM2_RADIO_SWAP', "Swaps COM 2 frequency with standby", "All aircraft"), + (b'NAV1_RADIO_FRACT_DEC_CARRY', "Decrement NAV 1 frequency by 50 KHz, and carry when digit wraps", "Shared Cockpit"), + (b'NAV1_RADIO_FRACT_INC_CARRY', "Increment NAV 1 frequency by 50 KHz, and carry when digit wraps", "Shared Cockpit"), + (b'NAV1_STBY_SET', "Sets NAV 1 standby frequency (BCD Hz),", "Shared Cockpit"), + (b'NAV1_RADIO_SWAP', "Swaps NAV 1 frequency with standby", "Shared Cockpit"), + (b'NAV2_RADIO_FRACT_DEC_CARRY', "Decrement NAV 2 frequency by 50 KHz, and carry when digit wraps", "Shared Cockpit"), + (b'NAV2_RADIO_FRACT_INC_CARRY', "Increment NAV 2 frequency by 50 KHz, and carry when digit wraps", "Shared Cockpit"), + (b'NAV2_STBY_SET', "Sets NAV 2 standby frequency (BCD Hz),", "Shared Cockpit"), + (b'NAV2_RADIO_SWAP', "Swaps NAV 2 frequency with standby", "Shared Cockpit"), + (b'ADF1_RADIO_TENTHS_DEC', "Decrements ADF 1 by 0.1 KHz.", "Shared Cockpit"), + (b'ADF1_RADIO_TENTHS_INC', "Increments ADF 1 by 0.1 KHz.", "Shared Cockpit"), + (b'XPNDR_1000_DEC', "Decrements first digit of transponder", "Shared Cockpit"), + (b'XPNDR_100_DEC', "Decrements second digit of transponder", "Shared Cockpit"), + (b'XPNDR_10_DEC', "Decrements third digit of transponder", "Shared Cockpit"), + (b'XPNDR_1_DEC', "Decrements fourth digit of transponder", "Shared Cockpit"), + (b'XPNDR_DEC_CARRY', "Decrements 
fourth digit of transponder, and with carry.", "Shared Cockpit"), + (b'XPNDR_INC_CARRY', "Increments fourth digit of transponder, and with carry.", "Shared Cockpit"), + (b'ADF_FRACT_DEC_CARRY', "Decrements ADF 1 frequency by 0.1 KHz, with carry", "Shared Cockpit"), + (b'ADF_FRACT_INC_CARRY', "Increments ADF 1 frequency by 0.1 KHz, with carry", "Shared Cockpit"), + (b'COM1_TRANSMIT_SELECT', "Selects COM 1 to transmit", "All aircraft"), + (b'COM2_TRANSMIT_SELECT', "Selects COM 2 to transmit", "All aircraft"), + (b'COM_RECEIVE_ALL_TOGGLE', "Toggles all COM radios to receive on", "All aircraft"), + (b'COM_RECEIVE_ALL_SET', "Sets whether to receive on all COM radios (1,0),", "All aircraft"), + (b'MARKER_SOUND_TOGGLE', "Toggles marker beacon sound on/off", "Shared Cockpit"), + (b'Unsupported', "Sets marker beacon sound (1, 0),", "Shared Cockpit"), + (b'ADF_COMPLETE_SET', "Sets ADF 1 frequency (BCD Hz),", "Shared Cockpit"), + (b'ADF1_WHOLE_INC', "Increments ADF 1 by 1 KHz, with carry as digits wrap.", "Shared Cockpit"), + (b'ADF1_WHOLE_DEC', "Decrements ADF 1 by 1 KHz, with carry as digits wrap.", "Shared Cockpit"), + (b'ADF2_100_INC', "Increments the ADF 2 frequency 100 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_10_INC', "Increments the ADF 2 frequency 10 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_1_INC', "Increments the ADF 2 frequency 1 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_RADIO_TENTHS_INC', "Increments ADF 2 frequency 1/10 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_100_DEC', "Decrements the ADF 2 frequency 100 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_10_DEC', "Decrements the ADF 2 frequency 10 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_1_DEC', "Decrements the ADF 2 frequency 1 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_RADIO_TENTHS_DEC', "Decrements ADF 2 frequency 1/10 digit, with wrapping", "Shared Cockpit"), + (b'ADF2_WHOLE_INC', "Increments ADF 2 by 1 KHz, with carry as digits wrap.", "Shared Cockpit"), + (b'ADF2_WHOLE_DEC', "Decrements ADF 2 by 1 KHz, with carry as digits wrap.", "Shared Cockpit"), + (b'ADF2_FRACT_DEC_CARRY', "Decrements ADF 2 frequency by 0.1 KHz, with carry", "Shared Cockpit"), + (b'ADF2_FRACT_INC_CARRY', "Increments ADF 2 frequency by 0.1 KHz, with carry", "Shared Cockpit"), + (b'ADF2_COMPLETE_SET', "Sets ADF 2 frequency (BCD Hz),", "Shared Cockpit"), + (b'RADIO_ADF2_IDENT_DISABLE', "Turns ADF 2 ID off", "Shared Cockpit"), + (b'RADIO_ADF2_IDENT_ENABLE', "Turns ADF 2 ID on", "Shared Cockpit"), + (b'RADIO_ADF2_IDENT_TOGGLE', "Toggles ADF 2 ID", "Shared Cockpit"), + (b'RADIO_ADF2_IDENT_SET', "Sets ADF 2 ID on/off (1,0),", "Shared Cockpit"), + (b'FREQUENCY_SWAP', "Swaps frequency with standby on whichever NAV or COM radio is selected.", "Shared Cockpit"), + (b'TOGGLE_GPS_DRIVES_NAV1', "Toggles between GPS and NAV 1 driving NAV 1 OBS display (and AP),", "Shared Cockpit"), + (b'GPS_POWER_BUTTON', "Toggles power button", "Shared Cockpit"), + (b'GPS_NEAREST_BUTTON', "Selects Nearest Airport Page", "Shared Cockpit"), + (b'GPS_OBS_BUTTON', "Toggles automatic sequencing of waypoints", "Shared Cockpit"), + (b'GPS_MSG_BUTTON', "Toggles the Message Page", "Shared Cockpit"), + (b'GPS_MSG_BUTTON_DOWN', "Triggers the pressing of the message button.", "Shared Cockpit"), + (b'GPS_MSG_BUTTON_UP', "Triggers the release of the message button", "Shared Cockpit"), + (b'GPS_FLIGHTPLAN_BUTTON', "Displays the programmed flightplan.", "Shared Cockpit"), + (b'GPS_TERRAIN_BUTTON', "Displays terrain information on default 
display", "Shared Cockpit"), + (b'GPS_PROCEDURE_BUTTON', "Displays the approach procedure page.", "Shared Cockpit"), + (b'GPS_ZOOMIN_BUTTON', "Zooms in default display", "Shared Cockpit"), + (b'GPS_ZOOMOUT_BUTTON', "Zooms out default display", "Shared Cockpit"), + (b'GPS_DIRECTTO_BUTTON', "Brings up the \"Direct To\" page", "Shared Cockpit"), + (b'GPS_MENU_BUTTON', "Brings up page to select active legs in a flightplan.", "Shared Cockpit"), + (b'GPS_CLEAR_BUTTON', "Clears entered data on a page", "Shared Cockpit"), + (b'GPS_CLEAR_ALL_BUTTON', "Clears all data immediately", "Shared Cockpit"), + (b'GPS_CLEAR_BUTTON_DOWN', "Triggers the pressing of the Clear button", "Shared Cockpit"), + (b'GPS_CLEAR_BUTTON_UP', "Triggers the release of the Clear button.", "Shared Cockpit"), + (b'GPS_ENTER_BUTTON', "Approves entered data.", "Shared Cockpit"), + (b'GPS_CURSOR_BUTTON', "Selects GPS cursor", "Shared Cockpit"), + (b'GPS_GROUP_KNOB_INC', "Increments cursor", "Shared Cockpit"), + (b'GPS_GROUP_KNOB_DEC', "Decrements cursor", "Shared Cockpit"), + (b'GPS_PAGE_KNOB_INC', "Increments through pages", "Shared Cockpit"), + (b'GPS_PAGE_KNOB_DEC', "Decrements through pages", "Shared Cockpit"), + (b'DME_SELECT', "Selects one of the two DME systems (1,2),.", "Shared Cockpit"), + (b'RADIO_SELECTED_DME_IDENT_ENABLE', "Turns on the identification sound for the selected DME.", "Shared Cockpit"), + (b'RADIO_SELECTED_DME_IDENT_DISABLE', "Turns off the identification sound for the selected DME.", "Shared Cockpit"), + (b'RADIO_SELECTED_DME_IDENT_SET', "Sets the DME identification sound to the given filename.", "Shared Cockpit"), + (b'RADIO_SELECTED_DME_IDENT_TOGGLE', "Turns on or off the identification sound for the selected DME.", "Shared Cockpit"), + ] + + class __Instruments(EventHelper): + list = [ + (b'EGT', "Selects EGT bug for +/-", "Shared Cockpit"), + (b'EGT_INC', "Increments EGT bugs", "Shared Cockpit"), + (b'EGT_DEC', "Decrements EGT bugs", "Shared Cockpit"), + (b'EGT_SET', "Sets EGT bugs (0 to 32767),", "Shared Cockpit"), + (b'BAROMETRIC', "Syncs altimeter setting to sea level pressure, or 29.92 if above 18000 feet", "Shared Cockpit"), + (b'GYRO_DRIFT_INC', "Increments heading indicator", "Shared Cockpit"), + (b'GYRO_DRIFT_DEC', "Decrements heading indicator", "Shared Cockpit"), + (b'KOHLSMAN_INC', "Increments altimeter setting", "Shared Cockpit"), + (b'KOHLSMAN_DEC', "Decrements altimeter setting", "Shared Cockpit"), + (b'KOHLSMAN_SET', "Sets altimeter setting (Millibars * 16),", "Shared Cockpit"), + (b'TRUE_AIRSPEED_CAL_INC', "Increments airspeed indicators true airspeed reference card", "Shared Cockpit"), + (b'TRUE_AIRSPEED_CAL_DEC', "Decrements airspeed indicators true airspeed reference card", "Shared Cockpit"), + (b'TRUE_AIRSPEED_CAL_SET', "Sets airspeed indicators true airspeed reference card (degrees, where 0 is standard sea level conditions),", "Shared Cockpit"), + (b'EGT1_INC', "Increments EGT bug 1", "Shared Cockpit"), + (b'EGT1_DEC', "Decrements EGT bug 1", "Shared Cockpit"), + (b'EGT1_SET', "Sets EGT bug 1 (0 to 32767),", "Shared Cockpit"), + (b'EGT2_INC', "Increments EGT bug 2", "Shared Cockpit"), + (b'EGT2_DEC', "Decrements EGT bug 2", "Shared Cockpit"), + (b'EGT2_SET', "Sets EGT bug 2 (0 to 32767),", "Shared Cockpit"), + (b'EGT3_INC', "Increments EGT bug 3", "Shared Cockpit"), + (b'EGT3_DEC', "Decrements EGT bug 3", "Shared Cockpit"), + (b'EGT3_SET', "Sets EGT bug 3 (0 to 32767),", "Shared Cockpit"), + (b'EGT4_INC', "Increments EGT bug 4", "Shared Cockpit"), + (b'EGT4_DEC', "Decrements EGT 
bug 4", "Shared Cockpit"), + (b'EGT4_SET', "Sets EGT bug 4 (0 to 32767),", "Shared Cockpit"), + (b'ATTITUDE_BARS_POSITION_UP', "Increments attitude indicator pitch reference bars", "Shared Cockpit"), + (b'ATTITUDE_BARS_POSITION_DOWN', "Decrements attitude indicator pitch reference bars", "Shared Cockpit"), + (b'ATTITUDE_CAGE_BUTTON', "Cages attitude indicator at 0 pitch and bank", "Shared Cockpit"), + (b'RESET_G_FORCE_INDICATOR', "Resets max/min indicated G force to 1.0.", "Shared Cockpit"), + (b'RESET_MAX_RPM_INDICATOR', "Reset max indicated engine rpm to 0.", "Shared Cockpit"), + (b'HEADING_GYRO_SET', "Sets heading indicator to 0 drift error.", "Shared Cockpit"), + (b'GYRO_DRIFT_SET', "Sets heading indicator drift angle (degrees),.", "Shared Cockpit"), + ] + + class __Lights(EventHelper): + list = [ + (b'STROBES_TOGGLE', "Toggle strobe lights ", "All aircraft"), + (b'ALL_LIGHTS_TOGGLE', "Toggle all lights", "Shared Cockpit"), + (b'PANEL_LIGHTS_TOGGLE', "Toggle panel lights", "All aircraft"), + (b'LANDING_LIGHTS_TOGGLE', "Toggle landing lights", "All aircraft"), + (b'LANDING_LIGHT_UP', "Rotate landing light up", "Shared Cockpit"), + (b'LANDING_LIGHT_DOWN', "Rotate landing light down", "Shared Cockpit"), + (b'LANDING_LIGHT_LEFT', "Rotate landing light left", "Shared Cockpit"), + (b'LANDING_LIGHT_RIGHT', "Rotate landing light right", "Shared Cockpit"), + (b'LANDING_LIGHT_HOME', "Return landing light to default position", "Shared Cockpit"), + (b'STROBES_ON', "Turn strobe lights on", "All aircraft"), + (b'STROBES_OFF', "Turn strobe light off", "All aircraft"), + (b'STROBES_SET', "Set strobe lights on/off (1,0),", "All aircraft"), + (b'PANEL_LIGHTS_ON', "Turn panel lights on", "All aircraft"), + (b'PANEL_LIGHTS_OFF', "Turn panel lights off", "All aircraft"), + (b'PANEL_LIGHTS_SET', "Set panel lights on/off (1,0),", "All aircraft"), + (b'LANDING_LIGHTS_ON', "Turn landing lights on", "All aircraft"), + (b'LANDING_LIGHTS_OFF', "Turn landing lights off", "All aircraft"), + (b'LANDING_LIGHTS_SET', "Set landing lights on/off (1,0),", "All aircraft"), + (b'TOGGLE_BEACON_LIGHTS', "Toggle beacon lights", "All aircraft"), + (b'TOGGLE_TAXI_LIGHTS', "Toggle taxi lights", "All aircraft"), + (b'TOGGLE_LOGO_LIGHTS', "Toggle logo lights", "All aircraft"), + (b'TOGGLE_RECOGNITION_LIGHTS', "Toggle recognition lights", "All aircraft"), + (b'TOGGLE_WING_LIGHTS', "Toggle wing lights", "All aircraft"), + (b'TOGGLE_NAV_LIGHTS', "Toggle navigation lights", "All aircraft"), + (b'TOGGLE_CABIN_LIGHTS', "Toggle cockpit/cabin lights", "All aircraft"), + ] + + class __Failures(EventHelper): + list = [ + (b'TOGGLE_VACUUM_FAILURE', "Toggle vacuum system failure", "Shared Cockpit"), + (b'TOGGLE_ELECTRICAL_FAILURE', "Toggle electrical system failure", "Shared Cockpit"), + (b'TOGGLE_PITOT_BLOCKAGE', "Toggles blocked pitot tube", "Shared Cockpit"), + (b'TOGGLE_STATIC_PORT_BLOCKAGE', " Toggles blocked static port", "Shared Cockpit"), + (b'TOGGLE_HYDRAULIC_FAILURE', "Toggles hydraulic system failure", "Shared Cockpit"), + (b'TOGGLE_TOTAL_BRAKE_FAILURE', "Toggles brake failure (both),", "Shared Cockpit"), + (b'TOGGLE_LEFT_BRAKE_FAILURE', "Toggles left brake failure", "Shared Cockpit"), + (b'TOGGLE_RIGHT_BRAKE_FAILURE', "Toggles right brake failure", "Shared Cockpit"), + (b'TOGGLE_ENGINE1_FAILURE', "Toggle engine 1 failure", "Shared Cockpit"), + (b'TOGGLE_ENGINE2_FAILURE', "Toggle engine 2 failure", "Shared Cockpit"), + (b'TOGGLE_ENGINE3_FAILURE', "Toggle engine 3 failure", "Shared Cockpit"), + (b'TOGGLE_ENGINE4_FAILURE', "Toggle 
engine 4 failure", "Shared Cockpit"), + ] + + class __Miscellaneous_Systems(EventHelper): + list = [ + (b'SMOKE_TOGGLE', "Toggle smoke system switch", "All aircraft"), + (b'GEAR_TOGGLE', "Toggle gear handle", "All aircraft"), + (b'BRAKES', "Increment brake pressure ", "Shared Cockpit"), + (b'GEAR_SET', "Sets gear handle position up/down (0,1),", "All aircraft"), + (b'BRAKES_LEFT', "Increments left brake pressure", "Shared Cockpit"), + (b'BRAKES_RIGHT', "Increments right brake pressure", "Shared Cockpit"), + (b'PARKING_BRAKES', "Toggles parking brake on/off", "Shared Cockpit"), + (b'GEAR_PUMP', "Increments emergency gear extension", "Shared Cockpit"), + (b'PITOT_HEAT_TOGGLE', "Toggles pitot heat switch", "All aircraft"), + (b'SMOKE_ON', "Turns smoke system on", "All aircraft"), + (b'SMOKE_OFF', "Turns smoke system off", "All aircraft"), + (b'SMOKE_SET', "Sets smoke system on/off (1,0),", "All aircraft"), + (b'PITOT_HEAT_ON', "Turns pitot heat switch on", "Shared Cockpit"), + (b'PITOT_HEAT_OFF', "Turns pitot heat switch off", "Shared Cockpit"), + (b'PITOT_HEAT_SET', "Sets pitot heat switch on/off (1,0),", "Shared Cockpit"), + (b'GEAR_UP', "Sets gear handle in UP position", "All aircraft"), + (b'GEAR_DOWN', "Sets gear handle in DOWN position", "All aircraft"), + (b'TOGGLE_MASTER_BATTERY', "Toggles main battery switch", "All aircraft"), + (b'TOGGLE_MASTER_ALTERNATOR', "Toggles main alternator/generator switch", "All aircraft"), + (b'TOGGLE_ELECTRIC_VACUUM_PUMP', "Toggles backup electric vacuum pump", "Shared Cockpit"), + (b'TOGGLE_ALTERNATE_STATIC', "Toggles alternate static pressure port", "All aircraft"), + (b'DECREASE_DECISION_HEIGHT', "Decrements decision height reference", "Shared Cockpit"), + (b'INCREASE_DECISION_HEIGHT', "Increments decision height reference", "Shared Cockpit"), + (b'TOGGLE_STRUCTURAL_DEICE', "Toggles structural deice switch", "Shared Cockpit"), + (b'TOGGLE_PROPELLER_DEICE', "Toggles propeller deice switch", "Shared Cockpit"), + (b'TOGGLE_ALTERNATOR1', "Toggles alternator/generator 1 switch", "All aircraft"), + (b'TOGGLE_ALTERNATOR2', "Toggles alternator/generator 2 switch", "All aircraft"), + (b'TOGGLE_ALTERNATOR3', "Toggles alternator/generator 3 switch", "All aircraft"), + (b'TOGGLE_ALTERNATOR4', "Toggles alternator/generator 4 switch", "All aircraft"), + (b'TOGGLE_MASTER_BATTERY_ALTERNATOR', "Toggles master battery and alternator switch", "Shared Cockpit"), + (b'AXIS_LEFT_BRAKE_SET', "Sets left brake position from axis controller (e.g. joystick),. -16383 (0 brakes) to +16383 (max brakes)", "Shared Cockpit"), + (b'AXIS_RIGHT_BRAKE_SET', "Sets right brake position from axis controller (e.g. joystick),. -16383 (0 brakes) to +16383 (max brakes)", "Shared Cockpit"), + (b'TOGGLE_AIRCRAFT_EXIT', "Toggles primary door open/close. Follow by KEY_SELECT_2, etc for subsequent doors.", "Shared Cockpit"), + (b'TOGGLE_WING_FOLD', "Toggles wing folding", "Shared Cockpit"), + (b'SET_WING_FOLD', '''Sets the wings into the folded position suitable for storage, typically on a carrier. Takes a value: + 1 - fold wings, + 0 - unfold wings''', "Shared Cockpit"), + (b'TOGGLE_TAIL_HOOK_HANDLE', "Toggles tail hook", "Shared Cockpit"), + (b'SET_TAIL_HOOK_HANDLE', '''Sets the tail hook handle. Takes a value: + 1 - set tail hook, + 0 - retract tail hook''', "Shared Cockpit"), + (b'TOGGLE_WATER_RUDDER', "Toggles water rudders", "Shared Cockpit"), + (b'TOGGLE_PUSHBACK', "Toggles pushback.", "Shared Cockpit"), + (b'KEY_TUG_HEADING', "Triggers tug and sets the desired heading. 
The units are a 32 bit integer (0 to 4294967295), which represent 0 to 360 degrees. To set a 45 degree angle, for example, set the value to 4294967295 / 8.", "Shared Cockpit"), + (b'KEY_TUG_SPEED', "Triggers tug, and sets desired speed, in feet per second. The speed can be both positive (forward movement), and negative (backward movement).", "Shared Cockpit"), + (b'TUG_DISABLE', "Disables tug", "Shared Cockpit"), + (b'TOGGLE_MASTER_IGNITION_SWITCH', "Toggles master ignition switch", "Shared Cockpit"), + (b'TOGGLE_TAILWHEEL_LOCK', "Toggles tail wheel lock", "Shared Cockpit"), + (b'ADD_FUEL_QUANTITY', "Adds fuel to the aircraft, 25% of capacity by default. 0 to 65535 (max fuel), can be passed.", "Shared Cockpit"), + (b'TOW_PLANE_RELEASE', "Release a towed aircraft, usually a glider.", "Shared Cockpit"), + (b'TOW_PLANE_REQUEST', "Request a tow plane. The user aircraft must be tow-able, stationary, on the ground and not already attached for this to succeed.", "Shared Cockpit"), + (b'RELEASE_DROPPABLE_OBJECTS', "Release one droppable object. Multiple key events will release multiple objects.", "Shared Cockpit"), + (b'RETRACT_FLOAT_SWITCH_DEC', "If the plane has retractable floats, moves the retract position from Extend to Neutral, or Neutral to Retract.", "Shared Cockpit"), + (b'RETRACT_FLOAT_SWITCH_INC', "If the plane has retractable floats, moves the retract position from Retract to Neutral, or Neutral to Extend.", "Shared Cockpit"), + (b'TOGGLE_WATER_BALLAST_VALVE', "Turn the water ballast valve on or off.", "Shared Cockpit"), + (b'TOGGLE_VARIOMETER_SWITCH', "Turn the variometer on or off.", "Shared Cockpit"), + (b'TOGGLE_TURN_INDICATOR_SWITCH', "Turn the turn indicator on or off.", "Shared Cockpit"), + (b'APU_STARTER', "Start up the auxiliary power unit (APU),.", "Shared Cockpit"), + (b'APU_OFF_SWITCH', "Turn the APU off.", "Shared Cockpit"), + (b'APU_GENERATOR_SWITCH_TOGGLE', "Turn the auxiliary generator on or off.", "Shared Cockpit"), + (b'APU_GENERATOR_SWITCH_SET', "Set the auxiliary generator switch (0,1),.", "Shared Cockpit"), + (b'EXTINGUISH_ENGINE_FIRE', "Takes a two digit argument. The first digit represents the fire extinguisher index, and the second represents the engine index. For example, 11 would represent using bottle 1 on engine 1. 21 would represent using bottle 2 on engine 1. 
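# Sketch: KEY_TUG_HEADING maps 0..360 degrees onto the full unsigned 32-bit
# range, so a heading converts as below (illustration reusing `ae`):
def tug_heading_value(degrees: float) -> int:
    return round((degrees % 360) / 360 * 4294967295)

ae.find("KEY_TUG_HEADING")(tug_heading_value(45))   # 45 degrees ~ 4294967295 / 8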
Typical entries for a twin engine aircraft would be 11 and 22.", "Shared Cockpit"), + (b'HYDRAULIC_SWITCH_TOGGLE', "Turn the hydraulic switch on or off.", "Shared Cockpit"), + (b'BLEED_AIR_SOURCE_CONTROL_INC', "Increases the bleed air source control.", "Shared Cockpit"), + (b'BLEED_AIR_SOURCE_CONTROL_DEC', "Decreases the bleed air source control.", "Shared Cockpit"), + (b'BLEED_AIR_SOURCE_CONTROL_SET', '''Set to one of: + 0: auto + 1: off + 2: apu + 3: engines''', "Shared Cockpit"), + (b'TURBINE_IGNITION_SWITCH_TOGGLE', "Turn the turbine ignition switch on or off.", "Shared Cockpit"), + (b'CABIN_NO_SMOKING_ALERT_SWITCH_TOGGLE', "Turn the \"No smoking\" alert on or off.", "Shared Cockpit"), + (b'CABIN_SEATBELTS_ALERT_SWITCH_TOGGLE', "Turn the \"Fasten seatbelts\" alert on or off.", "Shared Cockpit"), + (b'ANTISKID_BRAKES_TOGGLE', "Turn the anti-skid braking system on or off.", "Shared Cockpit"), + (b'GPWS_SWITCH_TOGGLE', "Turn the ground proximity warning system (GPWS), on or off.", "Shared Cockpit"), + (b'MANUAL_FUEL_PRESSURE_PUMP', "Activate the manual fuel pressure pump.", "Shared Cockpit"), + ] + + class __Nose_wheel_steering(EventHelper): + list = [ + (b'STEERING_INC', "Increments the nose wheel steering position by 5 percent.", "Shared Cockpit"), + (b'STEERING_DEC', "Decrements the nose wheel steering position by 5 percent.", "Shared Cockpit"), + (b'STEERING_SET', "Sets the value of the nose wheel steering position. Zero is straight ahead (-16383 = far left, +16383 = far right).", "Shared Cockpit"), + ] + + class __Cabin_pressurization(EventHelper): + list = [ + (b'KEY_PRESSURIZATION_PRESSURE_ALT_INC', "Increases the altitude that the cabin is pressurized to.", "Shared Cockpit"), + (b'KEY_PRESSURIZATION_PRESSURE_ALT_DEC', "Decreases the altitude that the cabin is pressurized to.", "Shared Cockpit"), + (b'PRESSURIZATION_CLIMB_RATE_INC', "Sets the rate at which cabin pressurization is increased.", "Shared Cockpit"), + (b'PRESSURIZATION_CLIMB_RATE_DEC', "Sets the rate at which cabin pressurization is decreased.", "Shared Cockpit"), + (b'PRESSURIZATION_PRESSURE_DUMP_SWTICH', '''Sets the cabin pressure to the outside air pressure.''', "Shared Cockpit"), + ] + + class __Catapult_Launches(EventHelper): + list = [ + (b'TAKEOFF_ASSIST_ARM_TOGGLE', "Deploy or remove the assist arm. Refer to the document Notes on Aircraft Systems.", "Shared Cockpit"), + (b'TAKEOFF_ASSIST_ARM_SET', '''Value: + TRUE request set + FALSE request unset''', "Shared Cockpit"), + (b'TAKEOFF_ASSIST_FIRE', "If everything is set up correctly. 
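# Sketch: EXTINGUISH_ENGINE_FIRE above packs its two-digit argument as
# tens = bottle index, ones = engine index (illustration reusing `ae`):
def extinguisher_arg(bottle: int, engine: int) -> int:
    return bottle * 10 + engine

ae.find("EXTINGUISH_ENGINE_FIRE")(extinguisher_arg(2, 2))   # bottle 2 on engine 2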
Launch from the catapult.", "Shared Cockpit"), + (b'TOGGLE_LAUNCH_BAR_SWITCH', "Toggle the request for the launch bar to be installed or removed.", "Shared Cockpit"), + (b'SET_LAUNCH_BAR_SWITCH', '''Value: + TRUE request set + FALSE request unset''', "Shared Cockpit"), + ] + + class __Helicopter_Specific_Systems(EventHelper): + list = [ + (b'ROTOR_BRAKE', " Triggers rotor braking input", "Shared Cockpit"), + (b'ROTOR_CLUTCH_SWITCH_TOGGLE', "Toggles on electric rotor clutch switch", "Shared Cockpit"), + (b'ROTOR_CLUTCH_SWITCH_SET', "Sets electric rotor clutch switch on/off (1,0),", "Shared Cockpit"), + (b'ROTOR_GOV_SWITCH_TOGGLE', "Toggles the electric rotor governor switch", "Shared Cockpit"), + (b'ROTOR_GOV_SWITCH_SET', "Sets the electric rotor governor switch on/off (1,0),", "Shared Cockpit"), + (b'ROTOR_LATERAL_TRIM_INC', "Increments the lateral (right), rotor trim", "Shared Cockpit"), + (b'ROTOR_LATERAL_TRIM_DEC', "Decrements the lateral (right), rotor trim", "Shared Cockpit"), + (b'ROTOR_LATERAL_TRIM_SET', "Sets the lateral (right), rotor trim (0 to 16383)", "Shared Cockpit"), + ] + + class __Slings_and_Hoists(EventHelper): + list = [ + (b'SLING_PICKUP_RELEASE', "Toggle between pickup and release mode. Hold mode is automatic and cannot be selected. Refer to the document Notes on Aircraft Systems.", "Shared Cockpit"), + (b'HOIST_SWITCH_EXTEND', "The rate at which a hoist cable extends is set in the Aircraft Configuration File.", "Shared Cockpit"), + (b'HOIST_SWITCH_RETRACT', "The rate at which a hoist cable retracts is set in the Aircraft Configuration File.", "Shared Cockpit"), + (b'HOIST_SWITCH_SET', '''The data value should be set to one of: + <0 up + =0 off + >0 down''', "Shared Cockpit"), + (b'HOIST_DEPLOY_TOGGLE', "Toggles the hoist arm switch, extend or retract.", "Shared Cockpit"), + (b'HOIST_DEPLOY_SET', '''The data value should be set to: + 0 - set hoist switch to retract the arm + 1 - set hoist switch to extend the arm''', "Shared Cockpit"), + ] + + class __Slew_System(EventHelper): + list = [ + (b'SLEW_TOGGLE', "Toggles slew on/off", "Shared Cockpit (Pilot only),"), + (b'SLEW_OFF', "Turns slew off", "Shared Cockpit (Pilot only),"), + (b'SLEW_ON', "Turns slew on", "Shared Cockpit (Pilot only),"), + (b'SLEW_SET', "Sets slew on/off (1,0),", "Shared Cockpit (Pilot only)"), + (b'SLEW_RESET', "Stop slew and reset pitch, bank, and heading all to zero.", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_UP_FAST', "Slew upward fast", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_UP_SLOW', "Slew upward slow", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_FREEZE', "Stop vertical slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_DN_SLOW', "Slew downward slow", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_DN_FAST', "Slew downward fast", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_PLUS', "Increase upward slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_ALTIT_MINUS', "Decrease upward slew ", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_DN_FAST', "Slew pitch downward fast", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_DN_SLOW', "Slew pitch downward slow", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_FREEZE', "Stop pitch slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_UP_SLOW', "Slew pitch up slow", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_UP_FAST', "Slew pitch upward fast", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_PLUS', "Increase pitch up slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_PITCH_MINUS', "Decrease pitch up slew", 
"Shared Cockpit (Pilot only),"), + (b'SLEW_BANK_MINUS', "Increase left bank slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_AHEAD_PLUS', "Increase forward slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_BANK_PLUS', "Increase right bank slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_LEFT', "Slew to the left", "Shared Cockpit (Pilot only),"), + (b'SLEW_FREEZE', "Stop all slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_RIGHT', "Slew to the right", "Shared Cockpit (Pilot only),"), + (b'SLEW_HEADING_MINUS', "Increase slew heading to the left", "Shared Cockpit (Pilot only),"), + (b'SLEW_AHEAD_MINUS', "Decrease forward slew", "Shared Cockpit (Pilot only),"), + (b'SLEW_HEADING_PLUS', "Increase slew heading to the right", "Shared Cockpit (Pilot only),"), + (b'AXIS_SLEW_AHEAD_SET', "Sets forward slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + (b'AXIS_SLEW_SIDEWAYS_SET', "Sets sideways slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + (b'AXIS_SLEW_HEADING_SET', "Sets heading slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + (b'AXIS_SLEW_ALT_SET', "Sets vertical slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + (b'AXIS_SLEW_BANK_SET', "Sets roll slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + (b'AXIS_SLEW_PITCH_SET', "Sets pitch slew (+/- 16383),", "Shared Cockpit (Pilot only)"), + ] + + class __View_System(EventHelper): + list = [ + (b'VIEW_MODE', "Selects next view", "Shared Cockpit"), + (b'VIEW_WINDOW_TO_FRONT', "Sets active window to front", "Shared Cockpit"), + (b'VIEW_RESET', "Resets the view to the default", "Shared Cockpit"), + (b'VIEW_ALWAYS_PAN_UP', " ", "Shared Cockpit"), + (b'VIEW_ALWAYS_PAN_DOWN', " ", "Shared Cockpit"), + (b'NEXT_SUB_VIEW', " ", "Shared Cockpit"), + (b'PREV_SUB_VIEW', " ", "Shared Cockpit"), + (b'VIEW_TRACK_PAN_TOGGLE', " ", "Shared Cockpit"), + (b'VIEW_PREVIOUS_TOGGLE', " ", "Shared Cockpit"), + (b'VIEW_CAMERA_SELECT_START', " ", "Shared Cockpit"), + (b'PANEL_HUD_NEXT', " ", "Shared Cockpit"), + (b'PANEL_HUD_PREVIOUS', " ", "Shared Cockpit"), + (b'ZOOM_IN', "Zooms view in", "Shared Cockpit"), + (b'ZOOM_OUT', "Zooms view out", "Shared Cockpit"), + (b'MAP_ZOOM_FINE_IN', "Fine zoom in map view", "Shared Cockpit"), + (b'PAN_LEFT', "Pans view left", "Shared Cockpit"), + (b'PAN_RIGHT', "Pans view right", "Shared Cockpit"), + (b'MAP_ZOOM_FINE_OUT', "Fine zoom out in map view", "Shared Cockpit"), + (b'VIEW_FORWARD', "Sets view direction forward", "Shared Cockpit"), + (b'VIEW_FORWARD_RIGHT', "Sets view direction forward and right", "Shared Cockpit"), + (b'VIEW_RIGHT', "Sets view direction to the right", "Shared Cockpit"), + (b'VIEW_REAR_RIGHT', "Sets view direction to the rear and right", "Shared Cockpit"), + (b'VIEW_REAR', "Sets view direction to the rear", "Shared Cockpit"), + (b'VIEW_REAR_LEFT', "Sets view direction to the rear and left", "Shared Cockpit"), + (b'VIEW_LEFT', "Sets view direction to the left", "Shared Cockpit"), + (b'VIEW_FORWARD_LEFT', "Sets view direction forward and left", "Shared Cockpit"), + (b'VIEW_DOWN', "Sets view direction down", "Shared Cockpit"), + (b'ZOOM_MINUS', "Decreases zoom", "Shared Cockpit"), + (b'ZOOM_PLUS', "Increase zoom", "Shared Cockpit"), + (b'PAN_UP', "Pan view up", "Shared Cockpit"), + (b'PAN_DOWN', "Pan view down", "Shared Cockpit"), + (b'VIEW_MODE_REV', "Reverse view cycle", "Shared Cockpit"), + (b'ZOOM_IN_FINE', "Zoom in fine", "Shared Cockpit"), + (b'ZOOM_OUT_FINE', "Zoom out fine", "Shared Cockpit"), + (b'CLOSE_VIEW', "Close current view", "Shared Cockpit"), + (b'NEW_VIEW', "Open new view", 
"Shared Cockpit"), + (b'NEXT_VIEW', "Select next view", "Shared Cockpit"), + (b'PREV_VIEW', "Select previous view", "Shared Cockpit"), + (b'PAN_LEFT_UP', "Pan view left", "Shared Cockpit"), + (b'PAN_LEFT_DOWN', "Pan view left and down", "Shared Cockpit"), + (b'PAN_RIGHT_UP', "Pan view right and up", "Shared Cockpit"), + (b'PAN_RIGHT_DOWN', "Pan view right and down", "Shared Cockpit"), + (b'PAN_TILT_LEFT', "Tilt view left", "Shared Cockpit"), + (b'PAN_TILT_RIGHT', "Tilt view right", "Shared Cockpit"), + (b'PAN_RESET', "Reset view to forward", "Shared Cockpit"), + (b'VIEW_FORWARD_UP', "Sets view forward and up", "Shared Cockpit"), + (b'VIEW_FORWARD_RIGHT_UP', "Sets view forward, right, and up", "Shared Cockpit"), + (b'VIEW_RIGHT_UP', "Sets view right and up", "Shared Cockpit"), + (b'VIEW_REAR_RIGHT_UP', "Sets view rear, right, and up", "Shared Cockpit"), + (b'VIEW_REAR_UP', "Sets view rear and up", "Shared Cockpit"), + (b'VIEW_REAR_LEFT_UP', "Sets view rear left and up", "Shared Cockpit"), + (b'VIEW_LEFT_UP', "Sets view left and up", "Shared Cockpit"), + (b'VIEW_FORWARD_LEFT_UP', "Sets view forward left and up", "Shared Cockpit"), + (b'VIEW_UP', "Sets view up", "Shared Cockpit"), + (b'VIEW_RESET', "Reset view forward", "Shared Cockpit"), + (b'PAN_RESET_COCKPIT', "Reset panning to forward, if in cockpit view", "Shared Cockpit"), + (b'KEY_CHASE_VIEW_NEXT', "Cycle view to next target", "Shared Cockpit"), + (b'KEY_CHASE_VIEW_PREV', "Cycle view to previous target", "Shared Cockpit"), + (b'CHASE_VIEW_TOGGLE', "Toggles chase view on/off", "Shared Cockpit"), + (b'EYEPOINT_UP', "Move eyepoint up", "Shared Cockpit"), + (b'EYEPOINT_DOWN', "Move eyepoint down", "Shared Cockpit"), + (b'EYEPOINT_RIGHT', "Move eyepoint right", "Shared Cockpit"), + (b'EYEPOINT_LEFT', "Move eyepoint left", "Shared Cockpit"), + (b'EYEPOINT_FORWARD', "Move eyepoint forward", "Shared Cockpit"), + (b'EYEPOINT_BACK', "Move eyepoint backward", "Shared Cockpit"), + (b'EYEPOINT_RESET', "Move eyepoint to default position", "Shared Cockpit"), + (b'NEW_MAP', "Opens new map view", "Shared Cockpit"), + (b'VIEW_COCKPIT_FORWARD', "Switch immediately to the forward view, in 2D mode.", "Shared Cockpit"), + (b'VIEW_VIRTUAL_COCKPIT_FORWARD', "Switch immediately to the forward view, in virtual cockpit mode.", "Shared Cockpit"), + (b'VIEW_PANEL_ALPHA_SET', "Sets the alpha-blending value for the panel. Takes a parameter in the range 0 to 255. 
The alpha-blending can be changed from the keyboard using Ctrl-Shift-T, and the plus and minus keys.", "Shared Cockpit"), + (b'VIEW_PANEL_ALPHA_SELECT', "Sets the mode to change the alpha-blending, so the keys KEY_PLUS and KEY_MINUS increment and decrement the value.", "Shared Cockpit"), + (b'VIEW_PANEL_ALPHA_INC', "Increment alpha-blending for the panel.", "Shared Cockpit"), + (b'VIEW_PANEL_ALPHA_DEC', "Decrement alpha-blending for the panel.", "Shared Cockpit"), + (b'VIEW_LINKING_SET', "Links all the views from one camera together, so that panning the view will change the view of all the linked cameras.", "Shared Cockpit"), + (b'VIEW_LINKING_TOGGLE', "Turns view linking on or off.", "Shared Cockpit"), + (b'VIEW_CHASE_DISTANCE_ADD', "Increments the distance of the view camera from the chase object (such as in Spot Plane view, or viewing an AI controlled aircraft),.", "Shared Cockpit"), + (b'VIEW_CHASE_DISTANCE_SUB', "Decrements the distance of the view camera from the chase object.", "Shared Cockpit"), + ] + + class __Miscellaneous_Events(EventHelper): + list = [ + (b'PAUSE_TOGGLE', "Toggles pause on/off", "Disabled"), + (b'PAUSE_ON', "Turns pause on", "Disabled"), + (b'PAUSE_OFF', "Turns pause off", "Disabled"), + (b'PAUSE_SET', "Sets pause on/off (1,0),", "Disabled"), + (b'DEMO_STOP', "Stops demo system playback", "Shared Cockpit"), + (b'SELECT_1', "Sets \"selected\" index (for other events), to 1", "Shared Cockpit"), + (b'SELECT_2', "Sets \"selected\" index (for other events), to 2", "Shared Cockpit"), + (b'SELECT_3', "Sets \"selected\" index (for other events), to 3", "Shared Cockpit"), + (b'SELECT_4', "Sets \"selected\" index (for other events), to 4", "Shared Cockpit"), + (b'MINUS', "Used in conjunction with \"selected\" parameters to decrease their value (e.g., radio frequency),", "Shared Cockpit"), + (b'PLUS', "Used in conjunction with \"selected\" parameters to increase their value (e.g., radio frequency),", "Shared Cockpit"), + (b'ZOOM_1X', "Sets zoom level to 1", "Shared Cockpit"), + (b'SOUND_TOGGLE', "Toggles sound on/off", "Shared Cockpit"), + (b'SIM_RATE', "Selects simulation rate (use KEY_MINUS, KEY_PLUS to change),", "Shared Cockpit"), + (b'JOYSTICK_CALIBRATE', "Toggles joystick on/off", "Shared Cockpit"), + (b'SITUATION_SAVE', "Saves flight situation", "Shared Cockpit"), + (b'SITUATION_RESET', "Resets flight situation", "Shared Cockpit"), + (b'SOUND_SET', "Sets sound on/off (1,0),", "Shared Cockpit"), + (b'EXIT', "Quit ESP with a message", "Shared Cockpit"), + (b'ABORT', "Quit ESP without a message", "Shared Cockpit"), + (b'READOUTS_SLEW', "Cycle through information readouts while in slew", "Shared Cockpit"), + (b'READOUTS_FLIGHT', "Cycle through information readouts", "Shared Cockpit"), + (b'MINUS_SHIFT', "Used with other events", "Shared Cockpit"), + (b'PLUS_SHIFT', "Used with other events", "Shared Cockpit"), + (b'SIM_RATE_INCR', "Increase sim rate", "Shared Cockpit"), + (b'SIM_RATE_DECR', "Decrease sim rate", "Shared Cockpit"), + (b'KNEEBOARD_VIEW', "Toggles kneeboard", "Shared Cockpit"), + (b'PANEL_1', "Toggles panel 1", "Shared Cockpit"), + (b'PANEL_2', "Toggles panel 2", "Shared Cockpit"), + (b'PANEL_3', "Toggles panel 3", "Shared Cockpit"), + (b'PANEL_4', "Toggles panel 4", "Shared Cockpit"), + (b'PANEL_5', "Toggles panel 5", "Shared Cockpit"), + (b'PANEL_6', "Toggles panel 6", "Shared Cockpit"), + (b'PANEL_7', "Toggles panel 7", "Shared Cockpit"), + (b'PANEL_8', "Toggles panel 8", "Shared Cockpit"), + (b'PANEL_9', "Toggles panel 9", "Shared Cockpit"), + 
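# Sketch: pausing the sim and stepping the simulation rate with the events
# above (illustration reusing `ae`):
ae.find("PAUSE_SET")(1)       # 1 = pause on, 0 = pause off
ae.find("SIM_RATE_INCR")()    # parameterless: one rate step up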
(b'SOUND_ON', "Turns sound on", "Shared Cockpit"), + (b'SOUND_OFF', "Turns sound off", "Shared Cockpit"), + (b'INVOKE_HELP', "Brings up Help system", "Shared Cockpit"), + (b'TOGGLE_AIRCRAFT_LABELS', "Toggles aircraft labels", "Shared Cockpit"), + (b'FLIGHT_MAP', "Brings up flight map", "Shared Cockpit"), + (b'RELOAD_PANELS', "Reload panel data", "Shared Cockpit"), + (b'PANEL_ID_TOGGLE', "Toggles indexed panel (1 to 9),", "Shared Cockpit"), + (b'PANEL_ID_OPEN', "Opens indexed panel (1 to 9),", "Shared Cockpit"), + (b'PANEL_ID_CLOSE', "Closes indexed panel (1 to 9),", "Shared Cockpit"), + (b'RELOAD_USER_AIRCRAFT', "Reloads the user aircraft data (from cache if same type loaded as an AI, otherwise from disk),", "Shared Cockpit"), + (b'SIM_RESET', "Resets aircraft state", "Shared Cockpit"), + (b'VIRTUAL_COPILOT_TOGGLE', "Turns Flying Tips on/off", "Shared Cockpit"), + (b'VIRTUAL_COPILOT_SET', "Sets Flying Tips on/off (1,0),", "Shared Cockpit"), + (b'VIRTUAL_COPILOT_ACTION', "Triggers action noted in Flying Tips", "Shared Cockpit"), + (b'REFRESH_SCENERY', "Reloads scenery", "Shared Cockpit"), + (b'CLOCK_HOURS_DEC', "Decrements time by hours", "Shared Cockpit"), + (b'CLOCK_HOURS_INC', "Increments time by hours", "Shared Cockpit"), + (b'CLOCK_MINUTES_DEC', "Decrements time by minutes", "Shared Cockpit"), + (b'CLOCK_MINUTES_INC', "Increments time by minutes", "Shared Cockpit"), + (b'CLOCK_SECONDS_ZERO', "Zeros seconds", "Shared Cockpit"), + (b'CLOCK_HOURS_SET', "Sets hour of day", "Shared Cockpit"), + (b'CLOCK_MINUTES_SET', "Sets minutes of the hour", "Shared Cockpit"), + (b'ZULU_HOURS_SET', "Sets hours, zulu time", "Shared Cockpit"), + (b'ZULU_MINUTES_SET', "Sets minutes, in zulu time", "Shared Cockpit"), + (b'ZULU_DAY_SET', "Sets day, in zulu time", "Shared Cockpit"), + (b'ZULU_YEAR_SET', "Sets year, in zulu time", "Shared Cockpit"), + (b'GAUGE_KEYSTROKE', "Enables a keystroke to be sent to a gauge that is in focus. The keystrokes can only be in the range 0 to 9, A to Z, and the four keys: plus, minus, comma and period. This is typically used to allow some keyboard entry to a complex device such as a GPS to enter such things as ICAO codes using the keyboard, rather than turning dials.", "Shared Cockpit"), + (b'SIMUI_WINDOW_HIDESHOW', "Display the ATC window.", "Shared Cockpit"), + (b'VIEW_WINDOW_TITLES_TOGGLE', "Turn window titles on or off.", "Shared Cockpit"), + (b'AXIS_PAN_PITCH', "Sets the pitch of the axis. Requires an angle.", "Shared Cockpit"), + (b'AXIS_PAN_HEADING', "Sets the heading of the axis. Requires an angle.", "Shared Cockpit"), + (b'AXIS_PAN_TILT', "Sets the tilt of the axis. Requires an angle.", "Shared Cockpit"), + (b'VIEW_AXIS_INDICATOR_CYCLE', "Step through the view axes.", "Shared Cockpit"), + (b'VIEW_MAP_ORIENTATION_CYCLE', "Step through the map orientations.", "Shared Cockpit"), + (b'TOGGLE_JETWAY', "Requests a jetway, which will only be answered if the aircraft is at a parking spot.", "Shared Cockpit"), + (b'VIDEO_RECORD_TOGGLE', '''Turn on or off the video recording feature. This records uncompressed AVI format files to: + My Documents\\My Videos\\''', "Shared Cockpit"), + (b'TOGGLE_AIRPORT_NAME_DISPLAY', "Turn on or off the airport name.", "Shared Cockpit"), + (b'CAPTURE_SCREENSHOT', '''Capture the current view as a screenshot. Which will be saved to a bmp file in: + My Documents\\My Pictures\\''', "Shared Cockpit"), + (b'MOUSE_LOOK_TOGGLE', "Switch Mouse Look mode on or off. 
Mouse Look mode enables a user to control their view using the mouse, and holding down the space bar.", "Shared Cockpit"), + (b'YAXIS_INVERT_TOGGLE', "Switch inversion of Y axis controls on or off.", "Shared Cockpit"), + (b'AUTORUDDER_TOGGLE', "Turn the automatic rudder control feature on or off.", "Shared Cockpit"), + ] + + class __Freezing_position(EventHelper): + list = [ + (b'FREEZE_LATITUDE_LONGITUDE_TOGGLE', '''Turns the freezing of the lat/lon position of the aircraft (either user or AI controlled), on or off. If this key event is set, it means that the latitude and longitude of the aircraft are not being controlled by ESP, so enabling, for example, a SimConnect client to control the position of the aircraft. This can also apply to altitude and attitude. Refer to the simulation variables: + IS LATITUDE LONGITUDE FREEZE ON, + IS ALTITUDE FREEZE ON, and + IS ATTITUDE FREEZE ON + Refer also to the SimConnect_AIReleaseControl function. ''', "Shared Cockpit"), + (b'FREEZE_LATITUDE_LONGITUDE_SET', "Freezes the lat/lon position of the aircraft.", "Shared Cockpit"), + (b'FREEZE_ALTITUDE_TOGGLE', "Turns the freezing of the altitude of the aircraft on or off.", "Shared Cockpit"), + (b'FREEZE_ALTITUDE_SET', "Freezes the altitude of the aircraft.", "Shared Cockpit"), + (b'FREEZE_ATTITUDE_TOGGLE', "Turns the freezing of the attitude (pitch, bank and heading), of the aircraft on or off.", "Shared Cockpit"), + (b'FREEZE_ATTITUDE_SET', "Freezes the attitude (pitch, bank and heading), of the aircraft.", "Shared Cockpit"), + ] + + class __Mission_Keys(EventHelper): + list = [ + (b'POINT_OF_INTEREST_TOGGLE_POINTER', "Turn the point-of-interest indicator (often a light beam), on or off. Refer to the Missions system documentation.", "Shared Cockpit"), + (b'POINT_OF_INTEREST_CYCLE_PREVIOUS', "Change the current point-of-interest to the previous point-of-interest.", "Shared Cockpit"), + (b'POINT_OF_INTEREST_CYCLE_NEXT', "Change the current point-of-interest to the next point-of-interest.", "Shared Cockpit"), + ] + + class __ATC(EventHelper): + list = [ + (b'ATC', "Activates ATC window", "Shared Cockpit"), + (b'ATC_MENU_1', "Selects ATC option 1", "Shared Cockpit"), + (b'ATC_MENU_2', "Selects ATC option 2", "Shared Cockpit"), + (b'ATC_MENU_3', "Selects ATC option 3", "Shared Cockpit"), + (b'ATC_MENU_4', "Selects ATC option 4", "Shared Cockpit"), + (b'ATC_MENU_5', "Selects ATC option 5", "Shared Cockpit"), + (b'ATC_MENU_6', "Selects ATC option 6", "Shared Cockpit"), + (b'ATC_MENU_7', "Selects ATC option 7", "Shared Cockpit"), + (b'ATC_MENU_8', "Selects ATC option 8", "Shared Cockpit"), + (b'ATC_MENU_9', "Selects ATC option 9", "Shared Cockpit"), + (b'ATC_MENU_0', "Selects ATC option 10", "Shared Cockpit"), + ] + + class __Multiplayer(EventHelper): + list = [ + (b'MP_TRANSFER_CONTROL', "Toggle to the next player to track", "-"), + (b'MP_PLAYER_CYCLE', "Cycle through the current user aircraft.", "Shared Cockpit"), + (b'MP_PLAYER_FOLLOW', "Set the view to follow the selected user aircraft.", "Shared Cockpit"), + (b'MP_CHAT', "Toggles chat window visible/invisible", "Shared Cockpit"), + (b'MP_ACTIVATE_CHAT', "Activates chat window", "Shared Cockpit"), + (b'MP_VOICE_CAPTURE_START', "Start capturing audio from the user's computer and transmitting it to all other players in the multiplayer session who are tuned to the same radio frequency.", "Shared Cockpit"), + (b'MP_VOICE_CAPTURE_STOP', "Stop capturing radio audio.", "Shared Cockpit"), + (b'MP_BROADCAST_VOICE_CAPTURE_START', "Start capturing audio from the 
user's computer and transmitting it to all other players in the multiplayer session.", "Shared Cockpit"), + (b'MP_BROADCAST_VOICE_CAPTURE_STOP', "Stop capturing broadcast audio.", "Shared Cockpit"), + (b'TOGGLE_RACERESULTS_WINDOW', "Show or hide multi-player race results.", "Disabled"), + ] + + class __G1000_PFD(EventHelper): + list = [ + (b'G1000_PFD_FLIGHTPLAN_BUTTON', "The primary flight display (PFD) should display its current flight plan.", "Shared Cockpit"), + (b'G1000_PFD_PROCEDURE_BUTTON', "Turn to the Procedure page.", "Shared Cockpit"), + (b'G1000_PFD_ZOOMIN_BUTTON', "Zoom in on the current map.", "Shared Cockpit"), + (b'G1000_PFD_ZOOMOUT_BUTTON', "Zoom out on the current map.", "Shared Cockpit"), + (b'G1000_PFD_DIRECTTO_BUTTON', "Turn to the Direct To page.", "Shared Cockpit"), + (b'G1000_PFD_MENU_BUTTON', "If a segmented flight plan is highlighted, activates the associated menu.", "Shared Cockpit"), + (b'G1000_PFD_CLEAR_BUTTON', "Clears the current input.", "Shared Cockpit"), + (b'G1000_PFD_ENTER_BUTTON', "Enters the current input.", "Shared Cockpit"), + (b'G1000_PFD_CURSOR_BUTTON', "Turns on or off a screen cursor.", "Shared Cockpit"), + (b'G1000_PFD_GROUP_KNOB_INC', "Step up through the page groups.", "Shared Cockpit"), + (b'G1000_PFD_GROUP_KNOB_DEC', "Step down through the page groups.", "Shared Cockpit"), + (b'G1000_PFD_PAGE_KNOB_INC', "Step up through the individual pages.", "Shared Cockpit"), + (b'G1000_PFD_PAGE_KNOB_DEC', "Step down through the individual pages.", "Shared Cockpit"), + ] + # G1000_PFD_SOFTKEY1 through G1000_PFD_SOFTKEY12: Initiate the action for the icon displayed in the softkey position. Shared Cockpit + + class __G1000_MFD(EventHelper): + list = [ + (b'G1000_MFD_FLIGHTPLAN_BUTTON', "The multifunction display (MFD) should display its current flight plan.", "Shared Cockpit"), + (b'G1000_MFD_PROCEDURE_BUTTON', "Turn to the Procedure page.", "Shared Cockpit"), + (b'G1000_MFD_ZOOMIN_BUTTON', "Zoom in on the current map.", "Shared Cockpit"), + (b'G1000_MFD_ZOOMOUT_BUTTON', "Zoom out on the current map.", "Shared Cockpit"), + (b'G1000_MFD_DIRECTTO_BUTTON', "Turn to the Direct To page.", "Shared Cockpit"), + (b'G1000_MFD_MENU_BUTTON', "If a segmented flight plan is highlighted, activates the associated menu.", "Shared Cockpit"), + (b'G1000_MFD_CLEAR_BUTTON', "Clears the current input.", "Shared Cockpit"), + (b'G1000_MFD_ENTER_BUTTON', "Enters the current input.", "Shared Cockpit"), + (b'G1000_MFD_CURSOR_BUTTON', "Turns on or off a screen cursor.", "Shared Cockpit"), + (b'G1000_MFD_GROUP_KNOB_INC', "Step up through the page groups.", "Shared Cockpit"), + (b'G1000_MFD_GROUP_KNOB_DEC', "Step down through the page groups.", "Shared Cockpit"), + (b'G1000_MFD_PAGE_KNOB_INC', "Step up through the individual pages.", "Shared Cockpit"), + (b'G1000_MFD_PAGE_KNOB_DEC', "Step down through the individual pages.", "Shared Cockpit"), + ] + # G1000_MFD_SOFTKEY1 through G1000_MFD_SOFTKEY12: Initiate the action for the icon displayed in the softkey position. 
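# Sketch: the G1000 knob events page through groups and pages with the same
# trigger pattern as every other event (illustration reusing `ae`):
ae.find("G1000_MFD_GROUP_KNOB_INC")()   # next page group on the MFD
ae.find("G1000_MFD_PAGE_KNOB_INC")()    # next page within that group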
Shared Cockpit diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/FacilitiesList.py b/templates/skills/msfs2020_control/dependencies/SimConnect/FacilitiesList.py new file mode 100644 index 00000000..bbc0d918 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/FacilitiesList.py @@ -0,0 +1,112 @@ +from SimConnect import * +from .Enum import * +from .Constants import * + + +class Facilitie(object): + def __init__(self): + pass + + +class FacilitiesHelper: + def __init__(self, _sm, _parent): + self.sm = _sm + self.parent = _parent + self.REQUEST_ID = _sm.new_request_id() + self.item = None + self.sm.Facilities.append(self) + + def subscribe(self, _cbfunc): + if self.item < SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_COUNT: + self.cb = _cbfunc + hr = self.sm.dll.SubscribeToFacilities( + self.sm.hSimConnect, + SIMCONNECT_FACILITY_LIST_TYPE(self.item), + self.REQUEST_ID.value + ) + + def unsubscribe(self): + self.cb = None + hr = self.sm.dll.UnsubscribeToFacilities( + self.sm.hSimConnect, + SIMCONNECT_FACILITY_LIST_TYPE(self.item) + ) + + def get(self): + # Get the current cached list of airports, waypoints, etc, as the item indicates + if self.item < SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_COUNT: + hr = self.sm.dll.RequestFacilitiesList( + self.sm.hSimConnect, + SIMCONNECT_FACILITY_LIST_TYPE(self.item), + self.REQUEST_ID.value + ) + # self.sm.run() + + +class FacilitiesRequests(): + def __init__(self, _sm): + self.sm = _sm + self.list = [] + self.Airports = self.__FACILITY_AIRPORT(_sm, self) + self.list.append(self.Airports) + self.Waypoints = self.__FACILITY_WAYPOINT(_sm, self) + self.list.append(self.Waypoints) + self.NDBs = self.__FACILITY_NDB(_sm, self) + self.list.append(self.NDBs) + self.VORs = self.__FACILITY_VOR(_sm, self) + self.list.append(self.VORs) + + def dump(self, pList): + pList = cast(pList, POINTER(SIMCONNECT_RECV_FACILITIES_LIST)) + List = pList.contents + print("RequestID: %d dwArraySize: %d dwEntryNumber: %d dwOutOf: %d" % ( + List.dwRequestID, List.dwArraySize, List.dwEntryNumber, List.dwOutOf) + ) + + # Dump various facility elements + class __FACILITY_AIRPORT(FacilitiesHelper): + def __init__(self, _sm, _parent): + super().__init__(_sm, _parent) + self.item = SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_AIRPORT + + def dump(self, pFac): + pFac = cast(pFac, POINTER(SIMCONNECT_DATA_FACILITY_AIRPORT)) + Fac = pFac.contents + print("Icao: %s Latitude: %lg Longitude: %lg Altitude: %lg" % ( + Fac.Icao.decode(), Fac.Latitude, Fac.Longitude, Fac.Altitude) + ) + + class __FACILITY_WAYPOINT(FacilitiesHelper): + def __init__(self, _sm, _parent): + super().__init__(_sm, _parent) + self.item = SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_WAYPOINT + + def dump(self, pFac): + pFac = cast(pFac, POINTER(SIMCONNECT_DATA_FACILITY_WAYPOINT)) + Fac = pFac.contents + self.parent.Airports.dump(pFac) + print("\tfMagVar: %g" % (Fac.fMagVar)) + + class __FACILITY_NDB(FacilitiesHelper): + def __init__(self, _sm, _parent): + super().__init__(_sm, _parent) + self.item = SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_NDB + + def dump(self, pFac): + pFac = cast(pFac, POINTER(SIMCONNECT_DATA_FACILITY_NDB)) + Fac = pFac.contents + self.parent.Waypoints.dump(pFac) + print("\t\tfFrequency: %d" % (Fac.fFrequency)) + + class __FACILITY_VOR(FacilitiesHelper): + def __init__(self, _sm, _parent): + super().__init__(_sm, _parent) + self.item = 
SIMCONNECT_FACILITY_LIST_TYPE.SIMCONNECT_FACILITY_LIST_TYPE_VOR + + def dump(self, pFac): + pFac = cast(pFac, POINTER(SIMCONNECT_DATA_FACILITY_VOR)) + Fac = pFac.contents + self.parent.NDBs.dump(pFac) + print("\t\t\tFlags: %x fLocalizer: %f GlideLat: %lg GlideLon: %lg GlideAlt: %lg fGlideSlopeAngle: %f" % ( + Fac.Flags, Fac.fLocalizer, Fac.GlideLat, Fac.GlideLon, Fac.GlideAlt, Fac.fGlideSlopeAngle) + ) diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/RequestList.py b/templates/skills/msfs2020_control/dependencies/SimConnect/RequestList.py new file mode 100644 index 00000000..ccf16f59 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/RequestList.py @@ -0,0 +1,1172 @@ +from SimConnect import * +from .Enum import * +from .Constants import * + + +class Request(object): + + def get(self): + return self.value + + def set(self, _value): + self.value = _value + + @property + def value(self): + if self._deff_test(): + # self.sm.run() + if (self.LastData + self.time) < millis(): + if self.sm.get_data(self): + self.LastData = millis() + else: + return None + return self.outData + else: + return None + + @value.setter + def value(self, val): + if self._deff_test() and self.settable: + self.outData = val + self.sm.set_data(self) + # self.sm.run() + + def __init__(self, _deff, _sm, _time=10, _dec=None, _settable=False, _attemps=10): + self.DATA_DEFINITION_ID = None + self.definitions = [] + self.description = _dec + self._name = None + self.definitions.append(_deff) + self.outData = None + self.attemps = _attemps + self.sm = _sm + self.time = _time + self.defined = False + self.settable = _settable + self.LastData = 0 + self.LastID = 0 + if ':index' in str(self.definitions[0][0]): + self.lastIndex = b':index' + + def setIndex(self, index): + if not hasattr(self, "lastIndex"): + return False + (dec, stype) = self.definitions[0] + newindex = str(":" + str(index)).encode() + if newindex == self.lastIndex: + return + dec = dec.replace(self.lastIndex, newindex) + self.lastIndex = newindex + self.definitions[0] = (dec, stype) + self.redefine() + return True + + def redefine(self): + if self.DATA_DEFINITION_ID is not None: + self.sm.dll.ClearDataDefinition( + self.sm.hSimConnect, + self.DATA_DEFINITION_ID.value, + ) + self.defined = False + # self.sm.run() + if self._deff_test(): + # self.sm.run() + self.sm.get_data(self) + + def _deff_test(self): + if ':index' in str(self.definitions[0][0]): + self.lastIndex = b':index' + return False + if self.defined is True: + return True + if self.DATA_DEFINITION_ID is None: + self.DATA_DEFINITION_ID = self.sm.new_def_id() + self.DATA_REQUEST_ID = self.sm.new_request_id() + self.outData = None + self.sm.Requests[self.DATA_REQUEST_ID.value] = self + + rtype = self.definitions[0][1] + DATATYPE = SIMCONNECT_DATATYPE.SIMCONNECT_DATATYPE_FLOAT64 + if 'String' in rtype.decode() or 'string' in rtype.decode(): + rtype = None + DATATYPE = SIMCONNECT_DATATYPE.SIMCONNECT_DATATYPE_STRINGV + + err = self.sm.dll.AddToDataDefinition( + self.sm.hSimConnect, + self.DATA_DEFINITION_ID.value, + self.definitions[0][0], + rtype, + DATATYPE, + 0, + SIMCONNECT_UNUSED, + ) + if self.sm.IsHR(err, 0): + self.defined = True + temp = DWORD(0) + self.sm.dll.GetLastSentPacketID(self.sm.hSimConnect, temp) + self.LastID = temp.value + return True + else: + LOGGER.error("SIM def" + str(self.definitions[0])) + return False + + +class RequestHelper: + def __init__(self, _sm, _time=10, _attemps=10): + self.sm = _sm + self.dic = [] + self.time = _time + 
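# Note (sketch): Request.value above only re-queries SimConnect once `time`
# milliseconds have passed (LastData + time < millis()), so the _time argument
# threaded through the request helpers below acts as a polling throttle:
# larger values mean fewer round trips but staler data. Illustration, reusing
# `sm` from the event sketches above:
aq = AircraftRequests(sm, _time=2000)    # re-poll each variable at most every 2 s
print(aq.get("NUMBER_OF_ENGINES"))       # cached between polls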
self.attemps = _attemps + + def __getattribute__(self, _name): + return super().__getattribute__(_name) + + def __getattr__(self, _name): + if _name in self.list: + key = self.list.get(_name) + setable = False + if key[3] == 'Y': + setable = True + ne = Request((key[1], key[2]), self.sm, _dec=key[0], _settable=setable, _time=self.time, _attemps=self.attemps) + setattr(self, _name, ne) + return ne + return None + + def get(self, _name): + if getattr(self, _name) is None: + return None + return getattr(self, _name).value + + def set(self, _name, _value=0): + temp = getattr(self, _name) + if temp is None: + return False + if not getattr(temp, "settable"): + return False + + setattr(temp, "value", _value) + return True + + def json(self): + map = {} + for att in self.list: + val = self.get(att) + if val is not None: + try: + map[att] = val.value + except AttributeError: + map[att] = val + return map + + +class AircraftRequests(): + def find(self, key): + index = None + if ':' in key: + (keyname, index) = key.split(":", 1) + key = "%s:index" % (keyname) + + for clas in self.list: + if key in clas.list: + rqest = getattr(clas, key) + if index is not None: + rqest.setIndex(index) + return rqest + return None + + def get(self, key): + request = self.find(key) + if request is None: + return None + return request.value + + def set(self, key, _value): + request = self.find(key) + if request is None: + return False + request.value = _value + return True + + def __init__(self, _sm, _time=10, _attemps=10): + self.sm = _sm + self.list = [] + self.EngineData = self.__AircraftEngineData(_sm, _time, _attemps) + self.list.append(self.EngineData) + self.FuelTankSelection = self.__FuelTankSelection(_sm, _time, _attemps) + self.list.append(self.FuelTankSelection) + self.FuelData = self.__AircraftFuelData(_sm, _time, _attemps) + self.list.append(self.FuelData) + self.LightsData = self.__AircraftLightsData(_sm, _time, _attemps) + self.list.append(self.LightsData) + self.PositionandSpeedData = self.__AircraftPositionandSpeedData(_sm, _time, _attemps) + self.list.append(self.PositionandSpeedData) + self.FlightInstrumentationData = self.__AircraftFlightInstrumentationData(_sm, _time, _attemps) + self.list.append(self.FlightInstrumentationData) + self.AvionicsData = self.__AircraftAvionicsData(_sm, _time, _attemps) + self.list.append(self.AvionicsData) + self.ControlsData = self.__AircraftControlsData(_sm, _time, _attemps) + self.list.append(self.ControlsData) + self.AutopilotData = self.__AircraftAutopilotData(_sm, _time, _attemps) + self.list.append(self.AutopilotData) + self.LandingGearData = self.__AircraftLandingGearData(_sm, _time, _attemps) + self.list.append(self.LandingGearData) + self.AircraftEnvironmentData = self.__AircraftEnvironmentData(_sm, _time, _attemps) + self.list.append(self.AircraftEnvironmentData) + self.HelicopterSpecificData = self.__HelicopterSpecificData(_sm, _time, _attemps) + self.list.append(self.HelicopterSpecificData) + self.MiscellaneousSystemsData = self.__AircraftMiscellaneousSystemsData(_sm, _time, _attemps) + self.list.append(self.MiscellaneousSystemsData) + self.MiscellaneousData = self.__AircraftMiscellaneousData(_sm, _time, _attemps) + self.list.append(self.MiscellaneousData) + self.StringData = self.__AircraftStringData(_sm, _time, _attemps) + self.list.append(self.StringData) + self.AIControlledAircraft = self.__AIControlledAircraft(_sm, _time, _attemps) + self.list.append(self.AIControlledAircraft) + self.CarrierOperations = self.__CarrierOperations(_sm, _time, _attemps) + 
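# Sketch: AircraftRequests.find() above understands indexed SimVars: a key
# like "GENERAL_ENG_RPM:1" is matched against the ":index" templates in the
# lists below and rewritten via Request.setIndex() before the definition is
# registered (illustration reusing `sm`):
aq = AircraftRequests(sm)
rpm = aq.get("GENERAL_ENG_RPM:1")                       # engine 1 RPM (read-only, 'N')
aq.set("GENERAL_ENG_THROTTLE_LEVER_POSITION:1", 75.0)   # only 'Y'-flagged entries are settable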
self.list.append(self.CarrierOperations) + self.Racing = self.__Racing(_sm, _time, _attemps) + self.list.append(self.Racing) + self.EnvironmentData = self.__EnvironmentData(_sm, _time, _attemps) + self.list.append(self.EnvironmentData) + self.SlingsandHoists = self.__SlingsandHoists(_sm, _time, _attemps) + self.list.append(self.SlingsandHoists) + + class __AircraftEngineData(RequestHelper): + list = { + "NUMBER_OF_ENGINES": ["Number of engines (minimum 0, maximum 4)", b'NUMBER OF ENGINES', b'Number', 'N'], + "ENGINE_CONTROL_SELECT": ["Selected engines (combination of bit flags); 1 = Engine 1; 2 = Engine 2; 4 = Engine 3; 8 = Engine 4", b'ENGINE CONTROL SELECT', b'Mask', 'Y'], + "THROTTLE_LOWER_LIMIT": ["Percent throttle defining lower limit (negative for reverse thrust equipped airplanes)", b'THROTTLE LOWER LIMIT', b'Percent', 'N'], + "ENGINE_TYPE": ["Engine type:; 0 = Piston; 1 = Jet; 2 = None; 3 = Helo(Bell) turbine; 4 = Unsupported; 5 = Turboprop", b'ENGINE TYPE', b'Enum', 'N'], + "MASTER_IGNITION_SWITCH": ["Aircraft master ignition switch (grounds all engines magnetos)", b'MASTER IGNITION SWITCH', b'Bool', 'N'], + "GENERAL_ENG_COMBUSTION:index": ["Combustion flag", b'GENERAL ENG COMBUSTION:index', b'Bool', 'Y'], + "GENERAL_ENG_MASTER_ALTERNATOR:index": ["Alternator (generator) switch", b'GENERAL ENG MASTER ALTERNATOR:index', b'Bool', 'N'], + "GENERAL_ENG_FUEL_PUMP_SWITCH:index": ["Fuel pump switch", b'GENERAL ENG FUEL PUMP SWITCH:index', b'Bool', 'N'], + "GENERAL_ENG_FUEL_PUMP_ON:index": ["Fuel pump on/off", b'GENERAL ENG FUEL PUMP ON:index', b'Bool', 'N'], + "GENERAL_ENG_RPM:index": ["Engine rpm", b'GENERAL ENG RPM:index', b'Rpm', 'N'], + "GENERAL_ENG_PCT_MAX_RPM:index": ["Percent of max rated rpm", b'GENERAL ENG PCT MAX RPM:index', b'Percent', 'N'], + "GENERAL_ENG_MAX_REACHED_RPM:index": ["Maximum attained rpm", b'GENERAL ENG MAX REACHED RPM:index', b'Rpm', 'N'], + "GENERAL_ENG_THROTTLE_LEVER_POSITION:index": ["Percent of max throttle position", b'GENERAL ENG THROTTLE LEVER POSITION:index', b'Percent', 'Y'], + "GENERAL_ENG_MIXTURE_LEVER_POSITION:index": ["Percent of max mixture lever position", b'GENERAL ENG MIXTURE LEVER POSITION:index', b'Percent', 'Y'], + "GENERAL_ENG_PROPELLER_LEVER_POSITION:index": ["Percent of max prop lever position", b'GENERAL ENG PROPELLER LEVER POSITION:index', b'Percent', 'Y'], + "GENERAL_ENG_STARTER:index": ["Engine starter on/off", b'GENERAL ENG STARTER:index', b'Bool', 'N'], + "GENERAL_ENG_EXHAUST_GAS_TEMPERATURE:index": ["Engine exhaust gas temperature.", b'GENERAL ENG EXHAUST GAS TEMPERATURE:index', b'Rankine', 'Y'], + "GENERAL_ENG_OIL_PRESSURE:index": ["Engine oil pressure", b'GENERAL ENG OIL PRESSURE:index', b'Psf', 'Y'], + "GENERAL_ENG_OIL_LEAKED_PERCENT:index": ["Percent of max oil capacity leaked", b'GENERAL ENG OIL LEAKED PERCENT:index', b'Percent', 'N'], + "GENERAL_ENG_COMBUSTION_SOUND_PERCENT:index": ["Percent of maximum engine sound", b'GENERAL ENG COMBUSTION SOUND PERCENT:index', b'Percent', 'N'], + "GENERAL_ENG_DAMAGE_PERCENT:index": ["Percent of total engine damage", b'GENERAL ENG DAMAGE PERCENT:index', b'Percent', 'N'], + "GENERAL_ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'GENERAL ENG OIL TEMPERATURE:index', b'Rankine', 'Y'], + "GENERAL_ENG_FAILED:index": ["Fail flag", b'GENERAL ENG FAILED:index', b'Bool', 'N'], + "GENERAL_ENG_GENERATOR_SWITCH:index": ["Alternator (generator) switch", b'GENERAL ENG GENERATOR SWITCH:index', b'Bool', 'N'], + "GENERAL_ENG_GENERATOR_ACTIVE:index": ["Alternator (generator) on/off", b'GENERAL ENG 
GENERATOR ACTIVE:index', b'Bool', 'Y'], + "GENERAL_ENG_ANTI_ICE_POSITION:index": ["Engine anti-ice switch", b'GENERAL ENG ANTI ICE POSITION:index', b'Bool', 'N'], + "GENERAL_ENG_FUEL_VALVE:index": ["Fuel valve state", b'GENERAL ENG FUEL VALVE:index', b'Bool', 'N'], + "GENERAL_ENG_FUEL_PRESSURE:index": ["Engine fuel pressure", b'GENERAL ENG FUEL PRESSURE:index', b'Psi', 'Y'], + "GENERAL_ENG_ELAPSED_TIME:index": ["Total engine elapsed time", b'GENERAL ENG ELAPSED TIME:index', b'Hours', 'N'], + "RECIP_ENG_COWL_FLAP_POSITION:index": ["Percent cowl flap opened", b'RECIP ENG COWL FLAP POSITION:index', b'Percent', 'Y'], + "RECIP_ENG_PRIMER:index": ["Engine primer position", b'RECIP ENG PRIMER:index', b'Bool', 'Y'], + "RECIP_ENG_MANIFOLD_PRESSURE:index": ["Engine manifold pressure", b'RECIP ENG MANIFOLD PRESSURE:index', b'Psi', 'Y'], + "RECIP_ENG_ALTERNATE_AIR_POSITION:index": ["Alternate air control", b'RECIP ENG ALTERNATE AIR POSITION:index', b'Position', 'Y'], + "RECIP_ENG_COOLANT_RESERVOIR_PERCENT:index": ["Percent coolant available", b'RECIP ENG COOLANT RESERVOIR PERCENT:index', b'Percent', 'Y'], + "RECIP_ENG_LEFT_MAGNETO:index": ["Left magneto state", b'RECIP ENG LEFT MAGNETO:index', b'Bool', 'Y'], + "RECIP_ENG_RIGHT_MAGNETO:index": ["Right magneto state", b'RECIP ENG RIGHT MAGNETO:index', b'Bool', 'Y'], + "RECIP_ENG_BRAKE_POWER:index": ["Brake power produced by engine", b'RECIP ENG BRAKE POWER:index', b'Foot pounds per second', 'Y'], + "RECIP_ENG_STARTER_TORQUE:index": ["Torque produced by engine", b'RECIP ENG STARTER TORQUE:index', b'Foot pound', 'Y'], + "RECIP_ENG_TURBOCHARGER_FAILED:index": ["Turbo failed state", b'RECIP ENG TURBOCHARGER FAILED:index', b'Bool', 'Y'], + "RECIP_ENG_EMERGENCY_BOOST_ACTIVE:index": ["War emergency power active", b'RECIP ENG EMERGENCY BOOST ACTIVE:index', b'Bool', 'Y'], + "RECIP_ENG_EMERGENCY_BOOST_ELAPSED_TIME:index": ["Elapsed time war emergency power active", b'RECIP ENG EMERGENCY BOOST ELAPSED TIME:index', b'Hours', 'Y'], + "RECIP_ENG_WASTEGATE_POSITION:index": ["Percent turbo wastegate closed", b'RECIP ENG WASTEGATE POSITION:index', b'Percent', 'Y'], + "RECIP_ENG_TURBINE_INLET_TEMPERATURE:index": ["Engine turbine inlet temperature", b'RECIP ENG TURBINE INLET TEMPERATURE:index', b'Celsius', 'Y'], + "RECIP_ENG_CYLINDER_HEAD_TEMPERATURE:index": ["Engine cylinder head temperature", b'RECIP ENG CYLINDER HEAD TEMPERATURE:index', b'Celsius', 'Y'], + "RECIP_ENG_RADIATOR_TEMPERATURE:index": ["Engine radiator temperature", b'RECIP ENG RADIATOR TEMPERATURE:index', b'Celsius', 'Y'], + "RECIP_ENG_FUEL_AVAILABLE:index": ["True if fuel is available", b'RECIP ENG FUEL AVAILABLE:index', b'Bool', 'Y'], + "RECIP_ENG_FUEL_FLOW:index": ["Engine fuel flow", b'RECIP ENG FUEL FLOW:index', b'Pounds per hour', 'Y'], + "RECIP_ENG_FUEL_TANK_SELECTOR:index": ["Fuel tank selected for engine. 
See fuel tank list.", b'RECIP ENG FUEL TANK SELECTOR:index', b'Enum', 'N'], +
"RECIP_ENG_FUEL_NUMBER_TANKS_USED:index": ["Number of tanks currently being used", b'RECIP ENG FUEL NUMBER TANKS USED:index', b'Number', 'N'], +
"RECIP_CARBURETOR_TEMPERATURE:index": ["Carburetor temperature", b'RECIP CARBURETOR TEMPERATURE:index', b'Celsius', 'Y'], +
"RECIP_MIXTURE_RATIO:index": ["Fuel / Air mixture ratio", b'RECIP MIXTURE RATIO:index', b'Ratio', 'Y'], +
"TURB_ENG_N1:index": ["Turbine engine N1", b'TURB ENG N1:index', b'Percent', 'Y'], +
"TURB_ENG_N2:index": ["Turbine engine N2", b'TURB ENG N2:index', b'Percent', 'Y'], +
"TURB_ENG_CORRECTED_N1:index": ["Turbine engine corrected N1", b'TURB ENG CORRECTED N1:index', b'Percent', 'Y'], +
"TURB_ENG_CORRECTED_N2:index": ["Turbine engine corrected N2", b'TURB ENG CORRECTED N2:index', b'Percent', 'Y'], +
"TURB_ENG_CORRECTED_FF:index": ["Corrected fuel flow", b'TURB ENG CORRECTED FF:index', b'Pounds per hour', 'Y'], +
"TURB_ENG_MAX_TORQUE_PERCENT:index": ["Percent of max rated torque", b'TURB ENG MAX TORQUE PERCENT:index', b'Percent', 'Y'], +
"TURB_ENG_PRESSURE_RATIO:index": ["Engine pressure ratio", b'TURB ENG PRESSURE RATIO:index', b'Ratio', 'Y'], +
"TURB_ENG_ITT:index": ["Engine ITT", b'TURB ENG ITT:index', b'Rankine', 'Y'], +
"TURB_ENG_AFTERBURNER:index": ["Afterburner state", b'TURB ENG AFTERBURNER:index', b'Bool', 'N'], +
"TURB_ENG_JET_THRUST:index": ["Engine jet thrust", b'TURB ENG JET THRUST:index', b'Pounds', 'N'], +
"TURB_ENG_BLEED_AIR:index": ["Bleed air pressure", b'TURB ENG BLEED AIR:index', b'Psi', 'N'], +
"TURB_ENG_TANK_SELECTOR:index": ["Fuel tank selected for engine. 
See fuel tank list.", b'TURB ENG TANK SELECTOR:index', b'Enum', 'N'], +
"TURB_ENG_NUM_TANKS_USED:index": ["Number of tanks currently being used", b'TURB ENG NUM TANKS USED:index', b'Number', 'N'], +
"TURB_ENG_FUEL_FLOW_PPH:index": ["Engine fuel flow", b'TURB ENG FUEL FLOW PPH:index', b'Pounds per hour', 'N'], +
"TURB_ENG_FUEL_AVAILABLE:index": ["True if fuel is available", b'TURB ENG FUEL AVAILABLE:index', b'Bool', 'N'], +
"TURB_ENG_REVERSE_NOZZLE_PERCENT:index": ["Percent thrust reverser nozzles deployed", b'TURB ENG REVERSE NOZZLE PERCENT:index', b'Percent', 'N'], +
"TURB_ENG_VIBRATION:index": ["Engine vibration value", b'TURB ENG VIBRATION:index', b'Number', 'N'], +
"ENG_FAILED:index": ["Failure flag", b'ENG FAILED:index', b'Number', 'N'], +
"ENG_RPM_ANIMATION_PERCENT:index": ["Percent max rated rpm used for visual animation", b'ENG RPM ANIMATION PERCENT:index', b'Percent', 'N'], +
"ENG_ON_FIRE:index": ["On fire state", b'ENG ON FIRE:index', b'Bool', 'Y'], +
"ENG_FUEL_FLOW_BUG_POSITION:index": ["Fuel flow reference", b'ENG FUEL FLOW BUG POSITION:index', b'Pounds per hour', 'N'], +
"PROP_RPM:index": ["Propeller rpm", b'PROP RPM:index', b'Rpm', 'Y'], +
"PROP_MAX_RPM_PERCENT:index": ["Percent of max rated rpm", b'PROP MAX RPM PERCENT:index', b'Percent', 'N'], +
"PROP_THRUST:index": ["Propeller thrust", b'PROP THRUST:index', b'Pounds', 'N'], +
"PROP_BETA:index": ["Prop blade pitch angle", b'PROP BETA:index', b'Radians', 'N'], +
"PROP_FEATHERING_INHIBIT:index": ["Feathering inhibit flag", b'PROP FEATHERING INHIBIT:index', b'Bool', 'N'], +
"PROP_FEATHERED:index": ["Feathered state", b'PROP FEATHERED:index', b'Bool', 'N'], +
"PROP_SYNC_DELTA_LEVER:index": ["Corrected prop correction input on slaved engine", b'PROP SYNC DELTA LEVER:index', b'Position', 'N'], +
"PROP_AUTO_FEATHER_ARMED:index": ["Auto-feather armed state", b'PROP AUTO FEATHER ARMED:index', b'Bool', 'N'], +
"PROP_FEATHER_SWITCH:index": ["Prop feather switch", b'PROP FEATHER SWITCH:index', b'Bool', 'N'], +
"PANEL_AUTO_FEATHER_SWITCH:index": ["Auto-feather arming switch", b'PANEL AUTO FEATHER SWITCH:index', b'Bool', 'N'], +
"PROP_SYNC_ACTIVE:index": ["True if prop sync is active", b'PROP SYNC ACTIVE:index', b'Bool', 'N'], +
"PROP_DEICE_SWITCH:index": ["True if prop deice switch on", b'PROP DEICE SWITCH:index', b'Bool', 'N'], +
"ENG_COMBUSTION": ["True if the engine is running", b'ENG COMBUSTION', b'Bool', 'N'], +
"ENG_N1_RPM:index": ["Engine N1 rpm", b'ENG N1 RPM:index', b'Rpm (0 to 16384 = 0 to 100%)', 'N'], +
"ENG_N2_RPM:index": ["Engine N2 rpm", b'ENG N2 RPM:index', b'Rpm(0 to 16384 = 0 to 100%)', 'N'], +
"ENG_FUEL_FLOW_GPH:index": ["Engine fuel flow", b'ENG FUEL FLOW GPH:index', b'Gallons per hour', 'N'], +
"ENG_FUEL_FLOW_PPH:index": ["Engine fuel flow", b'ENG FUEL FLOW PPH:index', b'Pounds per hour', 'N'], +
"ENG_TORQUE:index": ["Torque", b'ENG TORQUE:index', b'Foot pounds', 'N'], +
"ENG_ANTI_ICE:index": ["Anti-ice switch", b'ENG ANTI ICE:index', b'Bool', 'N'], +
"ENG_PRESSURE_RATIO:index": ["Engine pressure ratio", b'ENG PRESSURE RATIO:index', b'Ratio (0-16384)', 'N'], +
"ENG_EXHAUST_GAS_TEMPERATURE:index": ["Exhaust gas temperature", b'ENG EXHAUST GAS TEMPERATURE:index', b'Rankine', 'N'], +
"ENG_EXHAUST_GAS_TEMPERATURE_GES:index": ["Governed engine setting", b'ENG EXHAUST GAS TEMPERATURE GES:index', b'Percent over 100', 'N'], +
"ENG_CYLINDER_HEAD_TEMPERATURE:index": ["Engine 
cylinder head temperature", b'ENG CYLINDER HEAD TEMPERATURE:index', b'Rankine', 'N'], +
"ENG_OIL_TEMPERATURE:index": ["Engine oil temperature", b'ENG OIL TEMPERATURE:index', b'Rankine', 'N'], +
"ENG_OIL_PRESSURE:index": ["Engine oil pressure", b'ENG OIL PRESSURE:index', b'foot pounds', 'N'], +
"ENG_OIL_QUANTITY:index": ["Engine oil quantity as a percentage of full capacity", b'ENG OIL QUANTITY:index', b'Percent over 100', 'N'], +
"ENG_HYDRAULIC_PRESSURE:index": ["Engine hydraulic pressure", b'ENG HYDRAULIC PRESSURE:index', b'foot pounds', 'N'], +
"ENG_HYDRAULIC_QUANTITY:index": ["Engine hydraulic fluid quantity, as a percentage of total capacity", b'ENG HYDRAULIC QUANTITY:index', b'Percent over 100', 'N'], +
"ENG_MANIFOLD_PRESSURE:index": ["Engine manifold pressure.", b'ENG MANIFOLD PRESSURE:index', b'inHG.', 'N'], +
"ENG_VIBRATION:index": ["Engine vibration", b'ENG VIBRATION:index', b'Number', 'N'], +
"ENG_RPM_SCALER:index": ["Obsolete", b'ENG RPM SCALER:index', b'Scalar', 'N'], +
"ENG_MAX_RPM": ["Maximum rpm", b'ENG MAX RPM', b'Rpm', 'N'], +
"GENERAL_ENG_STARTER_ACTIVE": ["True if engine starter is active", b'GENERAL ENG STARTER ACTIVE', b'Bool', 'N'], +
"GENERAL_ENG_FUEL_USED_SINCE_START": ["Fuel used since the engines were last started", b'GENERAL ENG FUEL USED SINCE START', b'Pounds', 'N'], +
"TURB_ENG_PRIMARY_NOZZLE_PERCENT:index": ["Percent thrust of primary nozzle", b'TURB ENG PRIMARY NOZZLE PERCENT:index', b'Percent over 100', 'N'], +
"TURB_ENG_IGNITION_SWITCH": ["True if the turbine engine ignition switch is on", b'TURB ENG IGNITION SWITCH', b'Bool', 'N'], +
"TURB_ENG_MASTER_STARTER_SWITCH": ["True if the turbine engine master starter switch is on", b'TURB ENG MASTER STARTER SWITCH', b'Bool', 'N'], +
"TURB_ENG_AFTERBURNER_STAGE_ACTIVE": ["The stage of the afterburner, or 0 if the afterburner is not active.", b'TURB ENG AFTERBURNER STAGE ACTIVE', b'Number', 'N'], +
} + + class __FuelTankSelection(RequestHelper): + list = { + } + + class __AircraftFuelData(RequestHelper): + list = { +
"FUEL_TANK_CENTER_LEVEL": ["Percent of maximum capacity", b'FUEL TANK CENTER LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_CENTER2_LEVEL": ["Percent of maximum capacity", b'FUEL TANK CENTER2 LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_CENTER3_LEVEL": ["Percent of maximum capacity", b'FUEL TANK CENTER3 LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_LEFT_MAIN_LEVEL": ["Percent of maximum capacity", b'FUEL TANK LEFT MAIN LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_LEFT_AUX_LEVEL": ["Percent of maximum capacity", b'FUEL TANK LEFT AUX LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_LEFT_TIP_LEVEL": ["Percent of maximum capacity", b'FUEL TANK LEFT TIP LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_RIGHT_MAIN_LEVEL": ["Percent of maximum capacity", b'FUEL TANK RIGHT MAIN LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_RIGHT_AUX_LEVEL": ["Percent of maximum capacity", b'FUEL TANK RIGHT AUX LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_RIGHT_TIP_LEVEL": ["Percent of maximum capacity", b'FUEL TANK RIGHT TIP LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_EXTERNAL1_LEVEL": ["Percent of maximum capacity", b'FUEL TANK EXTERNAL1 LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_EXTERNAL2_LEVEL": ["Percent of maximum capacity", b'FUEL TANK EXTERNAL2 LEVEL', b'Percent Over 100', 'Y'], +
"FUEL_TANK_CENTER_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK CENTER CAPACITY', b'Gallons', 'N'], +
"FUEL_TANK_CENTER2_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK CENTER2 CAPACITY', b'Gallons', 'N'], +
"FUEL_TANK_CENTER3_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK CENTER3 CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_LEFT_MAIN_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK LEFT MAIN CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_LEFT_AUX_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK LEFT AUX CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_LEFT_TIP_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK LEFT TIP CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_RIGHT_MAIN_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK RIGHT MAIN CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_RIGHT_AUX_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK RIGHT AUX CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_RIGHT_TIP_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK RIGHT TIP CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_EXTERNAL1_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK EXTERNAL1 CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_EXTERNAL2_CAPACITY": ["Maximum capacity in volume", b'FUEL TANK EXTERNAL2 CAPACITY', b'Gallons', 'N'], + "FUEL_LEFT_CAPACITY": ["Maximum capacity in volume", b'FUEL LEFT CAPACITY', b'Gallons', 'N'], + "FUEL_RIGHT_CAPACITY": ["Maximum capacity in volume", b'FUEL RIGHT CAPACITY', b'Gallons', 'N'], + "FUEL_TANK_CENTER_QUANTITY": ["Current quantity in volume", b'FUEL TANK CENTER QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_CENTER2_QUANTITY": ["Current quantity in volume", b'FUEL TANK CENTER2 QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_CENTER3_QUANTITY": ["Current quantity in volume", b'FUEL TANK CENTER3 QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_LEFT_MAIN_QUANTITY": ["Current quantity in volume", b'FUEL TANK LEFT MAIN QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_LEFT_AUX_QUANTITY": ["Current quantity in volume", b'FUEL TANK LEFT AUX QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_LEFT_TIP_QUANTITY": ["Current quantity in volume", b'FUEL TANK LEFT TIP QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_RIGHT_MAIN_QUANTITY": ["Current quantity in volume", b'FUEL TANK RIGHT MAIN QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_RIGHT_AUX_QUANTITY": ["Current quantity in volume", b'FUEL TANK RIGHT AUX QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_RIGHT_TIP_QUANTITY": ["Current quantity in volume", b'FUEL TANK RIGHT TIP QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_EXTERNAL1_QUANTITY": ["Current quantity in volume", b'FUEL TANK EXTERNAL1 QUANTITY', b'Gallons', 'Y'], + "FUEL_TANK_EXTERNAL2_QUANTITY": ["Current quantity in volume", b'FUEL TANK EXTERNAL2 QUANTITY', b'Gallons', 'Y'], + "FUEL_LEFT_QUANTITY": ["Current quantity in volume", b'FUEL LEFT QUANTITY', b'Gallons', 'N'], + "FUEL_RIGHT_QUANTITY": ["Current quantity in volume", b'FUEL RIGHT QUANTITY', b'Gallons', 'N'], + "FUEL_TOTAL_QUANTITY": ["Current quantity in volume", b'FUEL TOTAL QUANTITY', b'Gallons', 'N'], + "FUEL_WEIGHT_PER_GALLON": ["Fuel weight per gallon", b'FUEL WEIGHT PER GALLON', b'Pounds', 'N'], + "FUEL_TANK_SELECTOR:index": ["Which tank is selected. 
See fuel tank list.", b'FUEL TANK SELECTOR:index', b'Enum', 'N'], +
"FUEL_CROSS_FEED": ["Cross feed valve:; 0 = Closed; 1 = Open", b'FUEL CROSS FEED', b'Enum', 'N'], +
"FUEL_TOTAL_CAPACITY": ["Total capacity of the aircraft", b'FUEL TOTAL CAPACITY', b'Gallons', 'N'], +
"FUEL_SELECTED_QUANTITY_PERCENT": ["Percent or capacity for selected tank", b'FUEL SELECTED QUANTITY PERCENT', b'Percent Over 100', 'N'], +
"FUEL_SELECTED_QUANTITY": ["Quantity of selected tank", b'FUEL SELECTED QUANTITY', b'Gallons', 'N'], +
"FUEL_TOTAL_QUANTITY_WEIGHT": ["Current total fuel weight of the aircraft", b'FUEL TOTAL QUANTITY WEIGHT', b'Pounds', 'N'], +
"NUM_FUEL_SELECTORS": ["Number of selectors on the aircraft", b'NUM FUEL SELECTORS', b'Number', 'N'], +
"UNLIMITED_FUEL": ["Unlimited fuel flag", b'UNLIMITED FUEL', b'Bool', 'N'], +
"ESTIMATED_FUEL_FLOW": ["Estimated fuel flow at cruise", b'ESTIMATED FUEL FLOW', b'Pounds per hour', 'N'], +
} + + class __AircraftLightsData(RequestHelper): + list = { +
"LIGHT_STROBE": ["Light switch state", b'LIGHT STROBE', b'Bool', 'N'], +
"LIGHT_PANEL": ["Light switch state", b'LIGHT PANEL', b'Bool', 'N'], +
"LIGHT_LANDING": ["Light switch state", b'LIGHT LANDING', b'Bool', 'N'], +
"LIGHT_TAXI": ["Light switch state", b'LIGHT TAXI', b'Bool', 'N'], +
"LIGHT_BEACON": ["Light switch state", b'LIGHT BEACON', b'Bool', 'N'], +
"LIGHT_NAV": ["Light switch state", b'LIGHT NAV', b'Bool', 'N'], +
"LIGHT_LOGO": ["Light switch state", b'LIGHT LOGO', b'Bool', 'N'], +
"LIGHT_WING": ["Light switch state", b'LIGHT WING', b'Bool', 'N'], +
"LIGHT_RECOGNITION": ["Light switch state", b'LIGHT RECOGNITION', b'Bool', 'N'], +
"LIGHT_CABIN": ["Light switch state", b'LIGHT CABIN', b'Bool', 'N'], +
"LIGHT_ON_STATES": ["Bit mask:; 0x0001: Nav; 0x0002: Beacon; 0x0004: Landing; 0x0008: Taxi; 0x0010: Strobe; 0x0020: Panel; 0x0040: Recognition; 0x0080: Wing; 0x0100: Logo; 0x0200: Cabin", b'LIGHT ON STATES', b'Mask', 'N'], +
"LIGHT_STATES": ["Same as LIGHT ON STATES", b'LIGHT STATES', b'Mask', 'N'], +
# "LANDING_LIGHT_PBH": ["Landing light pitch bank and heading", b'LANDING LIGHT PBH', b'SIMCONNECT_DATA_XYZ', 'N'], +
"LIGHT_TAXI_ON": ["Return true if the light is on.", b'LIGHT TAXI ON', b'Bool', 'N'], +
"LIGHT_STROBE_ON": ["Return true if the light is on.", b'LIGHT STROBE ON', b'Bool', 'N'], +
"LIGHT_PANEL_ON": ["Return true if the light is on.", b'LIGHT PANEL ON', b'Bool', 'N'], +
"LIGHT_RECOGNITION_ON": ["Return true if the light is on.", b'LIGHT RECOGNITION ON', b'Bool', 'N'], +
"LIGHT_WING_ON": ["Return true if the light is on.", b'LIGHT WING ON', b'Bool', 'N'], +
"LIGHT_LOGO_ON": ["Return true if the light is on.", b'LIGHT LOGO ON', b'Bool', 'N'], +
"LIGHT_CABIN_ON": ["Return true if the light is on.", b'LIGHT CABIN ON', b'Bool', 'N'], +
"LIGHT_HEAD_ON": ["Return true if the light is on.", b'LIGHT HEAD ON', b'Bool', 'N'], +
"LIGHT_BRAKE_ON": ["Return true if the light is on.", b'LIGHT BRAKE ON', b'Bool', 'N'], +
"LIGHT_NAV_ON": ["Return true if the light is on.", b'LIGHT NAV ON', b'Bool', 'N'], +
"LIGHT_BEACON_ON": ["Return true if the light is on.", b'LIGHT BEACON ON', b'Bool', 'N'], +
"LIGHT_LANDING_ON": ["Return true if the light is on.", b'LIGHT LANDING ON', b'Bool', 'N'], +
} + + class __AircraftPositionandSpeedData(RequestHelper): + list = { +
"GROUND_VELOCITY": ["Speed relative to the earth's surface", b'GROUND VELOCITY', b'Knots', 'N'], +
"TOTAL_WORLD_VELOCITY": ["Speed relative to the earth's center", b'TOTAL WORLD VELOCITY', b'Feet per second', 'N'], +
"VELOCITY_BODY_Z": ["True longitudinal speed, 
relative to aircraft axis", b'VELOCITY BODY Z', b'Feet per second', 'Y'], +
"VELOCITY_BODY_X": ["True lateral speed, relative to aircraft axis", b'VELOCITY BODY X', b'Feet per second', 'Y'], +
"VELOCITY_BODY_Y": ["True vertical speed, relative to aircraft axis", b'VELOCITY BODY Y', b'Feet per second', 'Y'], +
"VELOCITY_WORLD_Z": ["Speed relative to earth, in North/South direction", b'VELOCITY WORLD Z', b'Feet per second', 'Y'], +
"VELOCITY_WORLD_X": ["Speed relative to earth, in East/West direction", b'VELOCITY WORLD X', b'Feet per second', 'Y'], +
"VELOCITY_WORLD_Y": ["Speed relative to earth, in vertical direction", b'VELOCITY WORLD Y', b'Feet per second', 'Y'], +
"ACCELERATION_WORLD_X": ["Acceleration relative to earth, in east/west direction", b'ACCELERATION WORLD X', b'Feet per second squared', 'Y'], +
"ACCELERATION_WORLD_Y": ["Acceleration relative to earth, in vertical direction", b'ACCELERATION WORLD Y', b'Feet per second squared', 'Y'], +
"ACCELERATION_WORLD_Z": ["Acceleration relative to earth, in north/south direction", b'ACCELERATION WORLD Z', b'Feet per second squared', 'Y'], +
"ACCELERATION_BODY_X": ["Acceleration relative to aircraft axis, in east/west direction", b'ACCELERATION BODY X', b'Feet per second squared', 'Y'], +
"ACCELERATION_BODY_Y": ["Acceleration relative to aircraft axis, in vertical direction", b'ACCELERATION BODY Y', b'Feet per second squared', 'Y'], +
"ACCELERATION_BODY_Z": ["Acceleration relative to aircraft axis, in north/south direction", b'ACCELERATION BODY Z', b'Feet per second squared', 'Y'], +
"ROTATION_VELOCITY_BODY_X": ["Rotation relative to aircraft axis", b'ROTATION VELOCITY BODY X', b'Feet per second', 'Y'], +
"ROTATION_VELOCITY_BODY_Y": ["Rotation relative to aircraft axis", b'ROTATION VELOCITY BODY Y', b'Feet per second', 'Y'], +
"ROTATION_VELOCITY_BODY_Z": ["Rotation relative to aircraft axis", b'ROTATION VELOCITY BODY Z', b'Feet per second', 'Y'], +
"RELATIVE_WIND_VELOCITY_BODY_X": ["Lateral speed relative to wind", b'RELATIVE WIND VELOCITY BODY X', b'Feet per second', 'N'], +
"RELATIVE_WIND_VELOCITY_BODY_Y": ["Vertical speed relative to wind", b'RELATIVE WIND VELOCITY BODY Y', b'Feet per second', 'N'], +
"RELATIVE_WIND_VELOCITY_BODY_Z": ["Longitudinal speed relative to wind", b'RELATIVE WIND VELOCITY BODY Z', b'Feet per second', 'N'], +
"PLANE_ALT_ABOVE_GROUND": ["Altitude above the surface", b'PLANE ALT ABOVE GROUND', b'Feet', 'Y'], +
"PLANE_LATITUDE": ["Latitude of aircraft, North is positive, South negative", b'PLANE LATITUDE', b'Degrees', 'Y'], +
"PLANE_LONGITUDE": ["Longitude of aircraft, East is positive, West negative", b'PLANE LONGITUDE', b'Degrees', 'Y'], +
"PLANE_ALTITUDE": ["Altitude of aircraft", b'PLANE ALTITUDE', b'Feet', 'Y'], +
"PLANE_PITCH_DEGREES": ["Pitch angle, although the name mentions degrees the units used are radians", b'PLANE PITCH DEGREES', b'Radians', 'Y'], +
"PLANE_BANK_DEGREES": ["Bank angle, although the name mentions degrees the units used are radians", b'PLANE BANK DEGREES', b'Radians', 'Y'], +
"PLANE_HEADING_DEGREES_TRUE": ["Heading relative to true north, although the name mentions degrees the units used are radians", b'PLANE HEADING DEGREES TRUE', b'Radians', 'Y'], +
"PLANE_HEADING_DEGREES_MAGNETIC": ["Heading relative to magnetic north, although the name mentions degrees the units used are radians", b'PLANE HEADING DEGREES MAGNETIC', b'Radians', 'Y'], +
"MAGVAR": ["Magnetic variation", b'MAGVAR', b'Degrees', 'N'], +
"GROUND_ALTITUDE": ["Altitude of surface", b'GROUND ALTITUDE', b'Meters', 'N'], +
"SIM_ON_GROUND": ["On ground flag", b'SIM ON GROUND', b'Bool', 'N'], + "INCIDENCE_ALPHA": ["Angle of attack", b'INCIDENCE ALPHA', b'Radians', 'N'], + "INCIDENCE_BETA": ["Sideslip angle", b'INCIDENCE BETA', b'Radians', 'N'], + "WING_FLEX_PCT:index": ["The current wing flex. Different values can be set for each wing (for example, during banking). Set an index of 1 for the left wing, and 2 for the right wing.", b'WING FLEX PCT:index', b'Percent over 100', 'Y'], + # "STRUCT_LATLONALT": ["Returns the latitude, longitude and altitude of the user aircraft.", b'STRUCT LATLONALT', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "STRUCT_LATLONALTPBH": ["Returns the pitch, bank and heading of the user aircraft.", b'STRUCT LATLONALTPBH', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "STRUCT_SURFACE_RELATIVE_VELOCITY": ["The relative surface velocity.", b'STRUCT SURFACE RELATIVE VELOCITY', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_WORLDVELOCITY": ["The world velocity.", b'STRUCT WORLDVELOCITY', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_WORLD_ROTATION_VELOCITY": ["The world rotation velocity.", b'STRUCT WORLD ROTATION VELOCITY', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_BODY_VELOCITY": ["The object body velocity.", b'STRUCT BODY VELOCITY', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_BODY_ROTATION_VELOCITY": ["The body rotation velocity. Individual body rotation values are in the Aircraft Position and Speed section.", b'STRUCT BODY ROTATION VELOCITY', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_WORLD_ACCELERATION": ["The world acceleration for each axis. Individual world acceleration values are in the Aircraft Position and Speed section.", b'STRUCT WORLD ACCELERATION', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_ENGINE_POSITION:index": ["The engine position relative to the reference datum position for the aircraft.", b'STRUCT ENGINE POSITION:index', b'SIMCONNECT_DATA_XYZ.', 'N'], + # "STRUCT_EYEPOINT_DYNAMIC_ANGLE": ["The angle of the eyepoint view. 
Zero, zero, zero is straight ahead.", b'STRUCT EYEPOINT DYNAMIC ANGLE', b'SIMCONNECT_DATA_XYZ', 'N'], + # "STRUCT_EYEPOINT_DYNAMIC_OFFSET": ["A variable offset away from the EYEPOINT POSITION", b'STRUCT EYEPOINT DYNAMIC OFFSET', b'SIMCONNECT_DATA_XYZ', 'N'], + # "EYEPOINT_POSITION": ["The eyepoint position relative to the reference datum position for the aircraft.", b'EYEPOINT POSITION', b'SIMCONNECT_DATA_XYZ', 'N'], + } + + class __AircraftFlightInstrumentationData(RequestHelper): + list = { + "AIRSPEED_TRUE": ["True airspeed", b'AIRSPEED TRUE', b'Knots', 'Y'], + "AIRSPEED_INDICATED": ["Indicated airspeed", b'AIRSPEED INDICATED', b'Knots', 'Y'], + "AIRSPEED_TRUE_CALIBRATE": ["Angle of True calibration scale on airspeed indicator", b'AIRSPEED TRUE CALIBRATE', b'Degrees', 'Y'], + "AIRSPEED_BARBER_POLE": ["Redline airspeed (dynamic on some aircraft)", b'AIRSPEED BARBER POLE', b'Knots', 'N'], + "AIRSPEED_MACH": ["Current mach", b'AIRSPEED MACH', b'Mach', 'N'], + "VERTICAL_SPEED": ["Vertical speed indication", b'VERTICAL SPEED', b'feet/minute', 'Y'], + "MACH_MAX_OPERATE": ["Maximum design mach", b'MACH MAX OPERATE', b'Mach', 'N'], + "STALL_WARNING": ["Stall warning state", b'STALL WARNING', b'Bool', 'N'], + "OVERSPEED_WARNING": ["Overspeed warning state", b'OVERSPEED WARNING', b'Bool', 'N'], + "BARBER_POLE_MACH": ["Mach associated with maximum airspeed", b'BARBER POLE MACH', b'Mach', 'N'], + "INDICATED_ALTITUDE": ["Altimeter indication", b'INDICATED ALTITUDE', b'Feet', 'Y'], + "KOHLSMAN_SETTING_MB": ["Altimeter setting", b'KOHLSMAN SETTING MB', b'Millibars', 'Y'], + "KOHLSMAN_SETTING_HG": ["Altimeter setting", b'KOHLSMAN SETTING HG', b'inHg', 'N'], + "ATTITUDE_INDICATOR_PITCH_DEGREES": ["AI pitch indication", b'ATTITUDE INDICATOR PITCH DEGREES', b'Radians', 'N'], + "ATTITUDE_INDICATOR_BANK_DEGREES": ["AI bank indication", b'ATTITUDE INDICATOR BANK DEGREES', b'Radians', 'N'], + "ATTITUDE_BARS_POSITION": ["AI reference pitch reference bars", b'ATTITUDE BARS POSITION', b'Percent Over 100', 'N'], + "ATTITUDE_CAGE": ["AI caged state", b'ATTITUDE CAGE', b'Bool', 'N'], + "WISKEY_COMPASS_INDICATION_DEGREES": ["Magnetic compass indication", b'WISKEY COMPASS INDICATION DEGREES', b'Degrees', 'Y'], + "PLANE_HEADING_DEGREES_GYRO": ["Heading indicator (directional gyro) indication", b'PLANE HEADING DEGREES GYRO', b'Radians', 'Y'], + "HEADING_INDICATOR": ["Heading indicator (directional gyro) indication", b'HEADING INDICATOR', b'Radians', 'N'], + "GYRO_DRIFT_ERROR": ["Angular error of heading indicator", b'GYRO DRIFT ERROR', b'Radians', 'N'], + "DELTA_HEADING_RATE": ["Rate of turn of heading indicator", b'DELTA HEADING RATE', b'Radians per second', 'Y'], + "TURN_COORDINATOR_BALL": ["Turn coordinator ball position", b'TURN COORDINATOR BALL', b'Position', 'N'], + "ANGLE_OF_ATTACK_INDICATOR": ["AoA indication", b'ANGLE OF ATTACK INDICATOR', b'Radians', 'N'], + "RADIO_HEIGHT": ["Radar altitude", b'RADIO HEIGHT', b'Feet', 'N'], + "PARTIAL_PANEL_ADF": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL ADF', b'Enum', 'Y'], + "PARTIAL_PANEL_AIRSPEED": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL AIRSPEED', b'Enum', 'Y'], + "PARTIAL_PANEL_ALTIMETER": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL ALTIMETER', b'Enum', 'Y'], + "PARTIAL_PANEL_ATTITUDE": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL ATTITUDE', b'Enum', 'Y'], + "PARTIAL_PANEL_COMM": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL COMM', b'Enum', 'Y'], + 
"PARTIAL_PANEL_COMPASS": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL COMPASS', b'Enum', 'Y'], + "PARTIAL_PANEL_ELECTRICAL": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL ELECTRICAL', b'Enum', 'Y'], + "PARTIAL_PANEL_AVIONICS": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL AVIONICS', b'Enum', 'N'], + "PARTIAL_PANEL_ENGINE": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL ENGINE', b'Enum', 'Y'], + "PARTIAL_PANEL_FUEL_INDICATOR": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL FUEL INDICATOR', b'Enum', 'N'], + "PARTIAL_PANEL_HEADING": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL HEADING', b'Enum', 'Y'], + "PARTIAL_PANEL_VERTICAL_VELOCITY": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL VERTICAL VELOCITY', b'Enum', 'Y'], + "PARTIAL_PANEL_TRANSPONDER": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL TRANSPONDER', b'Enum', 'Y'], + "PARTIAL_PANEL_NAV": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL NAV', b'Enum', 'Y'], + "PARTIAL_PANEL_PITOT": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL PITOT', b'Enum', 'Y'], + "PARTIAL_PANEL_TURN_COORDINATOR": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL TURN COORDINATOR', b'Enum', 'N'], + "PARTIAL_PANEL_VACUUM": ["Gauge fail flag (0 = ok, 1 = fail, 2 = blank)", b'PARTIAL PANEL VACUUM', b'Enum', 'Y'], + "MAX_G_FORCE": ["Maximum G force attained", b'MAX G FORCE', b'Gforce', 'N'], + "MIN_G_FORCE": ["Minimum G force attained", b'MIN G FORCE', b'Gforce', 'N'], + "SUCTION_PRESSURE": ["Vacuum system suction pressure", b'SUCTION PRESSURE', b'inHg', 'Y'], + } + + class __AircraftAvionicsData(RequestHelper): + list = { + "AVIONICS_MASTER_SWITCH": ["Avionics switch state", b'AVIONICS MASTER SWITCH', b'Bool', 'N'], + "NAV_SOUND:index": ["Nav audio flag. Index of 1 or 2.", b'NAV SOUND:index', b'Bool', 'N'], + "DME_SOUND": ["DME audio flag", b'DME SOUND', b'Bool', 'N'], + "ADF_SOUND:index": ["ADF audio flag. Index of 0 or 1.", b'ADF SOUND:index', b'Bool', 'N'], + "MARKER_SOUND": ["Marker audio flag", b'MARKER SOUND', b'Bool', 'N'], + "COM_TRANSMIT:index": ["Audio panel com transmit state. Index of 1 or 2.", b'COM TRANSMIT:index', b'Bool', 'N'], + "COM_RECIEVE_ALL": ["Flag if all Coms receiving", b'COM RECIEVE ALL', b'Bool', 'N'], + "COM_ACTIVE_FREQUENCY:index": ["Com frequency. Index is 1 or 2.", b'COM ACTIVE FREQUENCY:index', b'MHz', 'N'], + "COM_STANDBY_FREQUENCY:index": ["Com standby frequency. Index is 1 or 2.", b'COM STANDBY FREQUENCY:index', b'MHz', 'N'], + "NAV_AVAILABLE:index": ["Flag if Nav equipped on aircraft", b'NAV AVAILABLE:index', b'Bool', 'N'], + "NAV_ACTIVE_FREQUENCY:index": ["Nav active frequency. Index is 1 or 2.", b'NAV ACTIVE FREQUENCY:index', b'MHz', 'N'], + "NAV_STANDBY_FREQUENCY:index": ["Nav standby frequency. 
Index is 1 or 2.", b'NAV STANDBY FREQUENCY:index', b'MHz', 'N'], + "NAV_SIGNAL:index": ["Nav signal strength", b'NAV SIGNAL:index', b'Number', 'N'], + "NAV_HAS_NAV:index": ["Flag if Nav has signal", b'NAV HAS NAV:index', b'Bool', 'N'], + "NAV_HAS_LOCALIZER:index": ["Flag if tuned station is a localizer", b'NAV HAS LOCALIZER:index', b'Bool', 'N'], + "NAV_HAS_DME:index": ["Flag if tuned station has a DME", b'NAV HAS DME:index', b'Bool', 'N'], + "NAV_HAS_GLIDE_SLOPE:index": ["Flag if tuned station has a glideslope", b'NAV HAS GLIDE SLOPE:index', b'Bool', 'N'], + "NAV_BACK_COURSE_FLAGS:index": ["Returns the following bit flags:; BIT0: 1=back course available; BIT1: 1=localizer tuned in; BIT2: 1=on course; BIT7: 1=station active", b'NAV BACK COURSE FLAGS:index', b'Flags', 'N'], + "NAV_MAGVAR:index": ["Magnetic variation of tuned nav station", b'NAV MAGVAR:index', b'Degrees', 'N'], + "NAV_RADIAL:index": ["Radial that aircraft is on", b'NAV RADIAL:index', b'Degrees', 'N'], + "NAV_RADIAL_ERROR:index": ["Difference between current radial and OBS tuned radial", b'NAV RADIAL ERROR:index', b'Degrees', 'N'], + "NAV_LOCALIZER:index": ["Localizer course heading", b'NAV LOCALIZER:index', b'Degrees', 'N'], + "NAV_GLIDE_SLOPE_ERROR:index": ["Difference between current position and glideslope angle. Note that this provides 32 bit floating point precision, rather than the 8 bit integer precision of NAV GSI.", b'NAV GLIDE SLOPE ERROR:index', b'Degrees', 'N'], + "NAV_CDI:index": ["CDI needle deflection (+/- 127)", b'NAV CDI:index', b'Number', 'N'], + "NAV_GSI:index": ["Glideslope needle deflection (+/- 119). Note that this provides only 8 bit precision, whereas NAV GLIDE SLOPE ERROR provides 32 bit floating point precision.", b'NAV GSI:index', b'Number', 'N'], + "NAV_GS_FLAG:index": ["Glideslope flag", b'NAV GS FLAG:index', b'Bool', 'N'], + "NAV_OBS:index": ["OBS setting. Index of 1 or 2.", b'NAV OBS:index', b'Degrees', 'N'], + "NAV_DME:index": ["DME distance", b'NAV DME:index', b'Nautical miles', 'N'], + "NAV_DMESPEED:index": ["DME speed", b'NAV DMESPEED:index', b'Knots', 'N'], + "ADF_ACTIVE_FREQUENCY:index": ["ADF frequency. 
Index of 1 or 2.", b'ADF ACTIVE FREQUENCY:index', b'Frequency ADF BCD32', 'N'], + "ADF_STANDBY_FREQUENCY:index": ["ADF standby frequency", b'ADF STANDBY FREQUENCY:index', b'Hz', 'N'], + "ADF_RADIAL:index": ["Current direction from NDB station", b'ADF RADIAL:index', b'Degrees', 'N'], + "ADF_SIGNAL:index": ["Signal strength", b'ADF SIGNAL:index', b'Number', 'N'], + "TRANSPONDER_CODE:index": ["4-digit code", b'TRANSPONDER CODE:index', b'BCO16', 'N'], + "MARKER_BEACON_STATE": ["Marker beacon state:; 0 = None; 1 = Outer; 2 = Middle; 3 = Inner", b'MARKER BEACON STATE', b'Enum', 'Y'], + "INNER_MARKER": ["Inner marker state", b'INNER MARKER', b'Bool', 'Y'], + "MIDDLE_MARKER": ["Middle marker state", b'MIDDLE MARKER', b'Bool', 'Y'], + "OUTER_MARKER": ["Outer marker state", b'OUTER MARKER', b'Bool', 'Y'], + "NAV_RAW_GLIDE_SLOPE:index": ["Glide slope angle", b'NAV RAW GLIDE SLOPE:index', b'Degrees', 'N'], + "ADF_CARD": ["ADF compass rose setting", b'ADF CARD', b'Degrees', 'N'], + "HSI_CDI_NEEDLE": ["Needle deflection (+/- 127)", b'HSI CDI NEEDLE', b'Number', 'N'], + "HSI_GSI_NEEDLE": ["Needle deflection (+/- 119)", b'HSI GSI NEEDLE', b'Number', 'N'], + "HSI_CDI_NEEDLE_VALID": ["Signal valid", b'HSI CDI NEEDLE VALID', b'Bool', 'N'], + "HSI_GSI_NEEDLE_VALID": ["Signal valid", b'HSI GSI NEEDLE VALID', b'Bool', 'N'], + "HSI_TF_FLAGS": ["Nav TO/FROM flag:; 0 = Off; 1 = TO; 2 = FROM", b'HSI TF FLAGS', b'Enum', 'N'], + "HSI_BEARING_VALID": ["This will return true if the HSI BEARING variable contains valid data.", b'HSI BEARING VALID', b'Bool', 'N'], + "HSI_BEARING": ["If the GPS DRIVES NAV1 variable is true and the HSI BEARING VALID variable is true, this variable contains the HSI needle bearing. If the GPS DRIVES NAV1 variable is false and the HSI BEARING VALID variable is true, this variable contains the ADF1 frequency.", b'HSI BEARING', b'Degrees', 'N'], + "HSI_HAS_LOCALIZER": ["Station is a localizer", b'HSI HAS LOCALIZER', b'Bool', 'N'], + "HSI_SPEED": ["DME/GPS speed", b'HSI SPEED', b'Knots', 'N'], + "HSI_DISTANCE": ["DME/GPS distance", b'HSI DISTANCE', b'Nautical miles', 'N'], + "GPS_POSITION_LAT": ["Current GPS latitude", b'GPS POSITION LAT', b'Degrees', 'N'], + "GPS_POSITION_LON": ["Current GPS longitude", b'GPS POSITION LON', b'Degrees', 'N'], + "GPS_POSITION_ALT": ["Current GPS altitude", b'GPS POSITION ALT', b'Meters', 'N'], + "GPS_MAGVAR": ["Current GPS magnetic variation", b'GPS MAGVAR', b'Radians', 'N'], + "GPS_IS_ACTIVE_FLIGHT_PLAN": ["Flight plan mode active", b'GPS IS ACTIVE FLIGHT PLAN', b'Bool', 'N'], + "GPS_IS_ACTIVE_WAY_POINT": ["Waypoint mode active", b'GPS IS ACTIVE WAY POINT', b'Bool', 'N'], + "GPS_IS_ARRIVED": ["Is flight plan destination reached", b'GPS IS ARRIVED', b'Bool', 'N'], + "GPS_IS_DIRECTTO_FLIGHTPLAN": ["Is Direct To Waypoint mode active", b'GPS IS DIRECTTO FLIGHTPLAN', b'Bool', 'N'], + "GPS_GROUND_SPEED": ["Current ground speed", b'GPS GROUND SPEED', b'Meters per second', 'N'], + "GPS_GROUND_TRUE_HEADING": ["Current true heading", b'GPS GROUND TRUE HEADING', b'Radians', 'N'], + "GPS_GROUND_MAGNETIC_TRACK": ["Current magnetic ground track", b'GPS GROUND MAGNETIC TRACK', b'Radians', 'N'], + "GPS_GROUND_TRUE_TRACK": ["Current true ground track", b'GPS GROUND TRUE TRACK', b'Radians', 'N'], + "GPS_WP_DISTANCE": ["Distance to waypoint", b'GPS WP DISTANCE', b'Meters', 'N'], + "GPS_WP_BEARING": ["Magnetic bearing to waypoint", b'GPS WP BEARING', b'Radians', 'N'], + "GPS_WP_TRUE_BEARING": ["True bearing to waypoint", b'GPS WP TRUE BEARING', b'Radians', 'N'], + 
"GPS_WP_CROSS_TRK": ["Cross track distance", b'GPS WP CROSS TRK', b'Meters', 'N'], + "GPS_WP_DESIRED_TRACK": ["Desired track to waypoint", b'GPS WP DESIRED TRACK', b'Radians', 'N'], + "GPS_WP_TRUE_REQ_HDG": ["Required true heading to waypoint", b'GPS WP TRUE REQ HDG', b'Radians', 'N'], + "GPS_WP_VERTICAL_SPEED": ["Vertical speed to waypoint", b'GPS WP VERTICAL SPEED', b'Meters per second', 'N'], + "GPS_WP_TRACK_ANGLE_ERROR": ["Tracking angle error to waypoint", b'GPS WP TRACK ANGLE ERROR', b'Radians', 'N'], + "GPS_ETE": ["Estimated time enroute to destination", b'GPS ETE', b'Seconds', 'N'], + "GPS_ETA": ["Estimated time of arrival at destination", b'GPS ETA', b'Seconds', 'N'], + "GPS_WP_NEXT_LAT": ["Latitude of next waypoint", b'GPS WP NEXT LAT', b'Degrees', 'N'], + "GPS_WP_NEXT_LON": ["Longitude of next waypoint", b'GPS WP NEXT LON', b'Degrees', 'N'], + "GPS_WP_NEXT_ALT": ["Altitude of next waypoint", b'GPS WP NEXT ALT', b'Meters', 'N'], + "GPS_WP_PREV_VALID": ["Is previous waypoint valid (i.e. current waypoint is not the first waypoint)", b'GPS WP PREV VALID', b'Bool', 'N'], + "GPS_WP_PREV_LAT": ["Latitude of previous waypoint", b'GPS WP PREV LAT', b'Degrees', 'N'], + "GPS_WP_PREV_LON": ["Longitude of previous waypoint", b'GPS WP PREV LON', b'Degrees', 'N'], + "GPS_WP_PREV_ALT": ["Altitude of previous waypoint", b'GPS WP PREV ALT', b'Meters', 'N'], + "GPS_WP_ETE": ["Estimated time enroute to waypoint", b'GPS WP ETE', b'Seconds', 'N'], + "GPS_WP_ETA": ["Estimated time of arrival at waypoint", b'GPS WP ETA', b'Seconds', 'N'], + "GPS_COURSE_TO_STEER": ["Suggested heading to steer (for autopilot)", b'GPS COURSE TO STEER', b'Radians', 'N'], + "GPS_FLIGHT_PLAN_WP_INDEX": ["Index of waypoint", b'GPS FLIGHT PLAN WP INDEX', b'Number', 'N'], + "GPS_FLIGHT_PLAN_WP_COUNT": ["Number of waypoints", b'GPS FLIGHT PLAN WP COUNT', b'Number', 'N'], + "GPS_IS_ACTIVE_WP_LOCKED": ["Is switching to next waypoint locked", b'GPS IS ACTIVE WP LOCKED', b'Bool', 'N'], + "GPS_IS_APPROACH_LOADED": ["Is approach loaded", b'GPS IS APPROACH LOADED', b'Bool', 'N'], + "GPS_IS_APPROACH_ACTIVE": ["Is approach mode active", b'GPS IS APPROACH ACTIVE', b'Bool', 'N'], + "GPS_APPROACH_IS_WP_RUNWAY": ["Waypoint is the runway", b'GPS APPROACH IS WP RUNWAY', b'Bool', 'N'], + "GPS_APPROACH_APPROACH_INDEX": ["Index of approach for given airport", b'GPS APPROACH APPROACH INDEX', b'Number', 'N'], + "GPS_APPROACH_TRANSITION_INDEX": ["Index of approach transition", b'GPS APPROACH TRANSITION INDEX', b'Number', 'N'], + "GPS_APPROACH_IS_FINAL": ["Is approach transition final approach segment", b'GPS APPROACH IS FINAL', b'Bool', 'N'], + "GPS_APPROACH_IS_MISSED": ["Is approach segment missed approach segment", b'GPS APPROACH IS MISSED', b'Bool', 'N'], + "GPS_APPROACH_TIMEZONE_DEVIATION": ["Deviation of local time from GMT", b'GPS APPROACH TIMEZONE DEVIATION', b'Seconds', 'N'], + "GPS_APPROACH_WP_INDEX": ["Index of current waypoint", b'GPS APPROACH WP INDEX', b'Number', 'N'], + "GPS_APPROACH_WP_COUNT": ["Number of waypoints", b'GPS APPROACH WP COUNT', b'Number', 'N'], + "GPS_DRIVES_NAV1": ["GPS is driving Nav 1 indicator", b'GPS DRIVES NAV1', b'Bool', 'N'], + "COM_RECEIVE_ALL": ["Toggles all COM radios to receive on", b'COM RECEIVE ALL', b'Bool', 'N'], + "COM_AVAILABLE": ["True if either COM1 or COM2 is available", b'COM AVAILABLE', b'Bool', 'N'], + "COM_TEST:index": ["Enter an index of 1 or 2. 
True if the COM system is working.", b'COM TEST:index', b'Bool', 'N'], + "TRANSPONDER_AVAILABLE": ["True if a transponder is available", b'TRANSPONDER AVAILABLE', b'Bool', 'N'], + "ADF_AVAILABLE": ["True if ADF is available", b'ADF AVAILABLE', b'Bool', 'N'], + "ADF_FREQUENCY:index": ["Legacy, use ADF ACTIVE FREQUENCY", b'ADF FREQUENCY:index', b'Frequency BCD16', 'N'], + "ADF_EXT_FREQUENCY:index": ["Legacy, use ADF ACTIVE FREQUENCY", b'ADF EXT FREQUENCY:index', b'Frequency BCD16', 'N'], + "ADF_IDENT": ["ICAO code", b'ADF IDENT', b'String', 'N'], + "ADF_NAME": ["Descriptive name", b'ADF NAME', b'String', 'N'], + "NAV_IDENT": ["ICAO code", b'NAV IDENT', b'String', 'N'], + "NAV_NAME": ["Descriptive name", b'NAV NAME', b'String', 'N'], + "NAV_CODES:index": ["Returns bit flags with the following meaning:; BIT7: 0= VOR 1= Localizer; BIT6: 1= glideslope available; BIT5: 1= no localizer backcourse; BIT4: 1= DME transmitter at glide slope transmitter; BIT3: 1= no nav signal available; BIT2: 1= voice available; BIT1: 1 = TACAN available; BIT0: 1= DME available", b'NAV CODES:index', b'Flags', 'N'], + "NAV_GLIDE_SLOPE": ["The glide slope gradient.", b'NAV GLIDE SLOPE', b'Number', 'N'], + "NAV_RELATIVE_BEARING_TO_STATION:index": ["Relative bearing to station", b'NAV RELATIVE BEARING TO STATION:index', b'Degrees', 'N'], + "SELECTED_DME": ["Selected DME", b'SELECTED DME', b'Number', 'N'], + "GPS_WP_NEXT_ID": ["ID of next GPS waypoint", b'GPS WP NEXT ID', b'String', 'N'], + "GPS_WP_PREV_ID": ["ID of previous GPS waypoint", b'GPS WP PREV ID', b'String', 'N'], + "GPS_TARGET_DISTANCE": ["Distance to target", b'GPS TARGET DISTANCE', b'Meters', 'N'], + "GPS_TARGET_ALTITUDE": ["Altitude of GPS target", b'GPS TARGET ALTITUDE', b'Meters', 'N'], + # "ADF_LATLONALT:index": ["Returns the latitude, longitude and altitude of the station the radio equipment is currently tuned to, or zeros if the radio is not tuned to any ADF station. 
Index of 1 or 2 for ADF 1 and ADF 2.", b'ADF LATLONALT:index', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "NAV_VOR_LATLONALT:index": ["Returns the VOR station latitude, longitude and altitude.", b'NAV VOR LATLONALT:index', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "NAV_GS_LATLONALT:index": ["Returns the glide slope.", b'NAV GS LATLONALT:index', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "NAV_DME_LATLONALT:index": ["Returns the DME station.", b'NAV DME LATLONALT:index', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "INNER_MARKER_LATLONALT": ["Returns the latitude, longitude and altitude of the inner marker of an approach to a runway, if the aircraft is within the required proximity, otherwise it will return zeros.", b'INNER MARKER LATLONALT', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "MIDDLE_MARKER_LATLONALT": ["Returns the latitude, longitude and altitude of the middle marker.", b'MIDDLE MARKER LATLONALT', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "OUTER_MARKER_LATLONALT": ["Returns the latitude, longitude and altitude of the outer marker.", b'OUTER MARKER LATLONALT', b'SIMCONNECT_DATA_LATLONALT', 'N'], + } + + class __AircraftControlsData(RequestHelper): + list = { + "YOKE_Y_POSITION": ["Percent control deflection fore/aft (for animation)", b'YOKE Y POSITION', b'Position', 'Y'], + "YOKE_X_POSITION": ["Percent control deflection left/right (for animation)", b'YOKE X POSITION', b'Position', 'Y'], + "RUDDER_PEDAL_POSITION": ["Percent rudder pedal deflection (for animation)", b'RUDDER PEDAL POSITION', b'Position', 'Y'], + "RUDDER_POSITION": ["Percent rudder input deflection", b'RUDDER POSITION', b'Position', 'Y'], + "ELEVATOR_POSITION": ["Percent elevator input deflection", b'ELEVATOR POSITION', b'Position', 'Y'], + "AILERON_POSITION": ["Percent aileron input left/right", b'AILERON POSITION', b'Position', 'Y'], + "ELEVATOR_TRIM_POSITION": ["Elevator trim deflection", b'ELEVATOR TRIM POSITION', b'Radians', 'Y'], + "ELEVATOR_TRIM_INDICATOR": ["Percent elevator trim (for indication)", b'ELEVATOR TRIM INDICATOR', b'Position', 'N'], + "ELEVATOR_TRIM_PCT": ["Percent elevator trim", b'ELEVATOR TRIM PCT', b'Percent Over 100', 'N'], + "BRAKE_LEFT_POSITION": ["Percent left brake", b'BRAKE LEFT POSITION', b'Position', 'Y'], + "BRAKE_RIGHT_POSITION": ["Percent right brake", b'BRAKE RIGHT POSITION', b'Position', 'Y'], + "BRAKE_INDICATOR": ["Brake on indication", b'BRAKE INDICATOR', b'Position', 'N'], + "BRAKE_PARKING_POSITION": ["Parking brake on", b'BRAKE PARKING POSITION', b'Position', 'Y'], + "BRAKE_PARKING_INDICATOR": ["Parking brake indicator", b'BRAKE PARKING INDICATOR', b'Bool', 'N'], + "SPOILERS_ARMED": ["Auto-spoilers armed", b'SPOILERS ARMED', b'Bool', 'N'], + "SPOILERS_HANDLE_POSITION": ["Spoiler handle position", b'SPOILERS HANDLE POSITION', b'Percent Over 100', 'Y'], + "SPOILERS_LEFT_POSITION": ["Percent left spoiler deflected", b'SPOILERS LEFT POSITION', b'Percent Over 100', 'N'], + "SPOILERS_RIGHT_POSITION": ["Percent right spoiler deflected", b'SPOILERS RIGHT POSITION', b'Percent Over 100', 'N'], + "FLAPS_HANDLE_PERCENT": ["Percent flap handle extended", b'FLAPS HANDLE PERCENT', b'Percent Over 100', 'N'], + "FLAPS_HANDLE_INDEX": ["Index of current flap position", b'FLAPS HANDLE INDEX', b'Number', 'Y'], + "FLAPS_NUM_HANDLE_POSITIONS": ["Number of flap positions", b'FLAPS NUM HANDLE POSITIONS', b'Number', 'N'], + "TRAILING_EDGE_FLAPS_LEFT_PERCENT": ["Percent left trailing edge flap extended", b'TRAILING EDGE FLAPS LEFT PERCENT', b'Percent Over 100', 'Y'], + "TRAILING_EDGE_FLAPS_RIGHT_PERCENT": ["Percent right 
trailing edge flap extended", b'TRAILING EDGE FLAPS RIGHT PERCENT', b'Percent Over 100', 'Y'], +
"TRAILING_EDGE_FLAPS_LEFT_ANGLE": ["Angle left trailing edge flap extended. Use TRAILING EDGE FLAPS LEFT PERCENT to set a value.", b'TRAILING EDGE FLAPS LEFT ANGLE', b'Radians', 'N'], +
"TRAILING_EDGE_FLAPS_RIGHT_ANGLE": ["Angle right trailing edge flap extended. Use TRAILING EDGE FLAPS RIGHT PERCENT to set a value.", b'TRAILING EDGE FLAPS RIGHT ANGLE', b'Radians', 'N'], +
"LEADING_EDGE_FLAPS_LEFT_PERCENT": ["Percent left leading edge flap extended", b'LEADING EDGE FLAPS LEFT PERCENT', b'Percent Over 100', 'Y'], +
"LEADING_EDGE_FLAPS_RIGHT_PERCENT": ["Percent right leading edge flap extended", b'LEADING EDGE FLAPS RIGHT PERCENT', b'Percent Over 100', 'Y'], +
"LEADING_EDGE_FLAPS_LEFT_ANGLE": ["Angle left leading edge flap extended. Use LEADING EDGE FLAPS LEFT PERCENT to set a value.", b'LEADING EDGE FLAPS LEFT ANGLE', b'Radians', 'N'], +
"LEADING_EDGE_FLAPS_RIGHT_ANGLE": ["Angle right leading edge flap extended. Use LEADING EDGE FLAPS RIGHT PERCENT to set a value.", b'LEADING EDGE FLAPS RIGHT ANGLE', b'Radians', 'N'], +
"AILERON_LEFT_DEFLECTION": ["Angle deflection", b'AILERON LEFT DEFLECTION', b'Radians', 'N'], +
"AILERON_LEFT_DEFLECTION_PCT": ["Percent deflection", b'AILERON LEFT DEFLECTION PCT', b'Percent Over 100', 'N'], +
"AILERON_RIGHT_DEFLECTION": ["Angle deflection", b'AILERON RIGHT DEFLECTION', b'Radians', 'N'], +
"AILERON_RIGHT_DEFLECTION_PCT": ["Percent deflection", b'AILERON RIGHT DEFLECTION PCT', b'Percent Over 100', 'N'], +
"AILERON_AVERAGE_DEFLECTION": ["Angle deflection", b'AILERON AVERAGE DEFLECTION', b'Radians', 'N'], +
"AILERON_TRIM": ["Angle deflection", b'AILERON TRIM', b'Radians', 'N'], +
"RUDDER_DEFLECTION": ["Angle deflection", b'RUDDER DEFLECTION', b'Radians', 'N'], +
"RUDDER_DEFLECTION_PCT": ["Percent deflection", b'RUDDER DEFLECTION PCT', b'Percent Over 100', 'N'], +
"RUDDER_TRIM": ["Angle deflection", b'RUDDER TRIM', b'Radians', 'N'], +
"FLAPS_AVAILABLE": ["True if flaps available", b'FLAPS AVAILABLE', b'Bool', 'N'], +
"FLAP_DAMAGE_BY_SPEED": ["True if flaps are damaged by excessive speed", b'FLAP DAMAGE BY SPEED', b'Bool', 'N'], +
"FLAP_SPEED_EXCEEDED": ["True if safe speed limit for flaps exceeded", b'FLAP SPEED EXCEEDED', b'Bool', 'N'], +
"ELEVATOR_DEFLECTION": ["Angle deflection", b'ELEVATOR DEFLECTION', b'Radians', 'N'], +
"ELEVATOR_DEFLECTION_PCT": ["Percent deflection", b'ELEVATOR DEFLECTION PCT', b'Percent Over 100', 'N'], +
"ALTERNATE_STATIC_SOURCE_OPEN": ["Alternate static air source", b'ALTERNATE STATIC SOURCE OPEN', b'Bool', 'N'], +
"AILERON_TRIM_PCT": ["The trim position of the ailerons. Zero is fully retracted.", b'AILERON TRIM PCT', b'Percent over 100', 'Y'], +
"RUDDER_TRIM_PCT": ["The trim position of the rudder. Zero is no trim.", b'RUDDER TRIM PCT', b'Percent over 100', 'Y'], +
"FOLDING_WING_HANDLE_POSITION": ["True if the folding wing handle is engaged.", b'FOLDING WING HANDLE POSITION', b'Bool', 'N'], +
"FUEL_DUMP_SWITCH": ["If true the aircraft is dumping fuel at the rate set in the configuration file.", b'FUEL DUMP SWITCH', b'Bool', 'N'], +
} + + class __AircraftAutopilotData(RequestHelper): + list = { +
"AUTOPILOT_AVAILABLE": ["Available flag", b'AUTOPILOT AVAILABLE', b'Bool', 'N'], +
"AUTOPILOT_MASTER": ["On/off flag", b'AUTOPILOT MASTER', b'Bool', 'N'], +
"AUTOPILOT_NAV_SELECTED": ["Index of Nav radio selected", b'AUTOPILOT NAV SELECTED', b'Number', 'N'], +
"AUTOPILOT_WING_LEVELER": ["Wing leveler active", b'AUTOPILOT WING LEVELER', b'Bool', 'N'], +
"AUTOPILOT_HEADING_LOCK": ["Heading mode active", b'AUTOPILOT HEADING LOCK', b'Bool', 'N'], +
"AUTOPILOT_HEADING_LOCK_DIR": ["Selected heading", b'AUTOPILOT HEADING LOCK DIR', b'Degrees', 'N'], +
"AUTOPILOT_ALTITUDE_LOCK": ["Altitude hold active", b'AUTOPILOT ALTITUDE LOCK', b'Bool', 'N'], +
"AUTOPILOT_ALTITUDE_LOCK_VAR": ["Selected altitude", b'AUTOPILOT ALTITUDE LOCK VAR', b'Feet', 'N'], +
"AUTOPILOT_ATTITUDE_HOLD": ["Attitude hold active", b'AUTOPILOT ATTITUDE HOLD', b'Bool', 'N'], +
"AUTOPILOT_GLIDESLOPE_HOLD": ["GS hold active", b'AUTOPILOT GLIDESLOPE HOLD', b'Bool', 'N'], +
"AUTOPILOT_PITCH_HOLD_REF": ["Current reference pitch", b'AUTOPILOT PITCH HOLD REF', b'Radians', 'N'], +
"AUTOPILOT_APPROACH_HOLD": ["Approach mode active", b'AUTOPILOT APPROACH HOLD', b'Bool', 'N'], +
"AUTOPILOT_BACKCOURSE_HOLD": ["Back course mode active", b'AUTOPILOT BACKCOURSE HOLD', b'Bool', 'N'], +
"AUTOPILOT_VERTICAL_HOLD_VAR": ["Selected vertical speed", b'AUTOPILOT VERTICAL HOLD VAR', b'Feet/minute', 'N'], +
"AUTOPILOT_PITCH_HOLD": ["Set to True if the autopilot pitch hold is engaged.", b'AUTOPILOT PITCH HOLD', b'Bool', 'N'], +
"AUTOPILOT_FLIGHT_DIRECTOR_ACTIVE": ["Flight director active", b'AUTOPILOT FLIGHT DIRECTOR ACTIVE', b'Bool', 'N'], +
"AUTOPILOT_FLIGHT_DIRECTOR_PITCH": ["Reference pitch angle", b'AUTOPILOT FLIGHT DIRECTOR PITCH', b'Radians', 'N'], +
"AUTOPILOT_FLIGHT_DIRECTOR_BANK": ["Reference bank angle", b'AUTOPILOT FLIGHT DIRECTOR BANK', b'Radians', 'N'], +
"AUTOPILOT_AIRSPEED_HOLD": ["Airspeed hold active", b'AUTOPILOT AIRSPEED HOLD', b'Bool', 'N'], +
"AUTOPILOT_AIRSPEED_HOLD_VAR": ["Selected airspeed", b'AUTOPILOT AIRSPEED HOLD VAR', b'Knots', 'N'], +
"AUTOPILOT_MACH_HOLD": ["Mach hold active", b'AUTOPILOT MACH HOLD', b'Bool', 'N'], +
"AUTOPILOT_MACH_HOLD_VAR": ["Selected mach", b'AUTOPILOT MACH HOLD VAR', b'Number', 'N'], +
"AUTOPILOT_YAW_DAMPER": ["Yaw damper active", b'AUTOPILOT YAW DAMPER', b'Bool', 'N'], +
"AUTOPILOT_RPM_HOLD_VAR": ["Selected rpm", b'AUTOPILOT RPM HOLD VAR', b'Number', 'N'], +
"AUTOPILOT_THROTTLE_ARM": ["Autothrottle armed", b'AUTOPILOT THROTTLE ARM', b'Bool', 'N'], +
"AUTOPILOT_TAKEOFF_POWER_ACTIVE": ["Takeoff / Go Around power mode active", b'AUTOPILOT TAKEOFF POWER ACTIVE', b'Bool', 'N'], +
"AUTOTHROTTLE_ACTIVE": ["Auto-throttle active", b'AUTOTHROTTLE ACTIVE', b'Bool', 'N'], +
"AUTOPILOT_NAV1_LOCK": ["True if autopilot nav1 lock applied", b'AUTOPILOT NAV1 LOCK', b'Bool', 'N'], +
"AUTOPILOT_VERTICAL_HOLD": ["True if autopilot vertical hold applied", b'AUTOPILOT VERTICAL HOLD', b'Bool', 'N'], +
"AUTOPILOT_RPM_HOLD": ["True if autopilot rpm hold applied", b'AUTOPILOT RPM HOLD', b'Bool', 'N'], +
"AUTOPILOT_MAX_BANK": ["True if 
autopilot max bank applied", b'AUTOPILOT MAX BANK', b'Radians', 'N'], + "FLY_BY_WIRE_ELAC_SWITCH": ["True if the fly by wire Elevators and Ailerons computer is on.", b'FLY BY WIRE ELAC SWITCH', b'Bool', 'N'], + "FLY_BY_WIRE_FAC_SWITCH": ["True if the fly by wire Flight Augmentation computer is on.", b'FLY BY WIRE FAC SWITCH', b'Bool', 'N'], + "FLY_BY_WIRE_SEC_SWITCH": ["True if the fly by wire Spoilers and Elevators computer is on.", b'FLY BY WIRE SEC SWITCH', b'Bool', 'N'], + "FLY_BY_WIRE_ELAC_FAILED": ["True if the Elevators and Ailerons computer has failed.", b'FLY BY WIRE ELAC FAILED', b'Bool', 'N'], + "FLY_BY_WIRE_FAC_FAILED": ["True if the Flight Augmentation computer has failed.", b'FLY BY WIRE FAC FAILED', b'Bool', 'N'], + "FLY_BY_WIRE_SEC_FAILED": ["True if the Spoilers and Elevators computer has failed.", b'FLY BY WIRE SEC FAILED', b'Bool', 'N'], + "AUTOPILOT_FLIGHT_LEVEL_CHANGE": ["True if autopilot FLC mode applied", b'AUTOPILOT FLIGHT LEVEL CHANGE', b'Bool', 'N'], + } + + class __AircraftLandingGearData(RequestHelper): + list = { + "IS_GEAR_RETRACTABLE": ["True if gear can be retracted", b'IS GEAR RETRACTABLE', b'Bool', 'N'], + "IS_GEAR_SKIS": ["True if landing gear is skis", b'IS GEAR SKIS', b'Bool', 'N'], + "IS_GEAR_FLOATS": ["True if landing gear is floats", b'IS GEAR FLOATS', b'Bool', 'N'], + "IS_GEAR_SKIDS": ["True if landing gear is skids", b'IS GEAR SKIDS', b'Bool', 'N'], + "IS_GEAR_WHEELS": ["True if landing gear is wheels", b'IS GEAR WHEELS', b'Bool', 'N'], + "GEAR_HANDLE_POSITION": ["True if gear handle is applied", b'GEAR HANDLE POSITION', b'Bool', 'Y'], + "GEAR_HYDRAULIC_PRESSURE": ["Gear hydraulic pressure", b'GEAR HYDRAULIC PRESSURE', b'psf', 'N'], + "TAILWHEEL_LOCK_ON": ["True if tailwheel lock applied", b'TAILWHEEL LOCK ON', b'Bool', 'N'], + "GEAR_CENTER_POSITION": ["Percent center gear extended", b'GEAR CENTER POSITION', b'Percent Over 100', 'Y'], + "GEAR_LEFT_POSITION": ["Percent left gear extended", b'GEAR LEFT POSITION', b'Percent Over 100', 'Y'], + "GEAR_RIGHT_POSITION": ["Percent right gear extended", b'GEAR RIGHT POSITION', b'Percent Over 100', 'Y'], + "GEAR_TAIL_POSITION": ["Percent tail gear extended", b'GEAR TAIL POSITION', b'Percent Over 100', 'N'], + "GEAR_AUX_POSITION": ["Percent auxiliary gear extended", b'GEAR AUX POSITION', b'Percent Over 100', 'N'], + "GEAR_POSITION:index": ["Position of landing gear:; 0 = unknown; 1 = up; 2 = down", b'GEAR POSITION:index', b'Enum', 'Y'], + "GEAR_ANIMATION_POSITION:index": ["Percent gear animation extended", b'GEAR ANIMATION POSITION:index', b'Number', 'N'], + "GEAR_TOTAL_PCT_EXTENDED": ["Percent total gear extended", b'GEAR TOTAL PCT EXTENDED', b'Percentage', 'N'], + "AUTO_BRAKE_SWITCH_CB": ["Auto brake switch position", b'AUTO BRAKE SWITCH CB', b'Number', 'N'], + "WATER_RUDDER_HANDLE_POSITION": ["Position of the water rudder handle (0 handle retracted, 100 rudder handle applied)", b'WATER RUDDER HANDLE POSITION', b'Percent Over 100', 'Y'], + "WATER_LEFT_RUDDER_EXTENDED": ["Percent extended", b'WATER LEFT RUDDER EXTENDED', b'Percentage', 'N'], + "WATER_RIGHT_RUDDER_EXTENDED": ["Percent extended", b'WATER RIGHT RUDDER EXTENDED', b'Percentage', 'N'], + "GEAR_CENTER_STEER_ANGLE": ["Center wheel angle, negative to the left, positive to the right.", b'GEAR CENTER STEER ANGLE', b'Percent Over 100', 'N'], + "GEAR_LEFT_STEER_ANGLE": ["Left wheel angle, negative to the left, positive to the right.", b'GEAR LEFT STEER ANGLE', b'Percent Over 100', 'N'], + "GEAR_RIGHT_STEER_ANGLE": ["Right wheel angle, negative to the 
left, positive to the right.", b'GEAR RIGHT STEER ANGLE', b'Percent Over 100', 'N'], + "GEAR_AUX_STEER_ANGLE": ["Aux wheel angle, negative to the left, positive to the right. The aux wheel is the fourth set of gear, sometimes used on helicopters.", b'GEAR AUX STEER ANGLE', b'Percent Over 100', 'N'], + "GEAR_STEER_ANGLE:index": ["Alternative method of getting the steer angle. Index is; 0 = center; 1 = left; 2 = right; 3 = aux", b'GEAR STEER ANGLE:index', b'Percent Over 100', 'N'], + "WATER_LEFT_RUDDER_STEER_ANGLE": ["Water left rudder angle, negative to the left, positive to the right.", b'WATER LEFT RUDDER STEER ANGLE', b'Percent Over 100', 'N'], + "WATER_RIGHT_RUDDER_STEER_ANGLE": ["Water right rudder angle, negative to the left, positive to the right.", b'WATER RIGHT RUDDER STEER ANGLE', b'Percent Over 100', 'N'], + "GEAR_CENTER_STEER_ANGLE_PCT": ["Center steer angle as a percentage", b'GEAR CENTER STEER ANGLE PCT', b'Percent Over 100', 'N'], + "GEAR_LEFT_STEER_ANGLE_PCT": ["Left steer angle as a percentage", b'GEAR LEFT STEER ANGLE PCT', b'Percent Over 100', 'N'], + "GEAR_RIGHT_STEER_ANGLE_PCT": ["Right steer angle as a percentage", b'GEAR RIGHT STEER ANGLE PCT', b'Percent Over 100', 'N'], + "GEAR_AUX_STEER_ANGLE_PCT": ["Aux steer angle as a percentage", b'GEAR AUX STEER ANGLE PCT', b'Percent Over 100', 'N'], + "GEAR_STEER_ANGLE_PCT:index": ["Alternative method of getting steer angle as a percentage. Index is; 0 = center; 1 = left; 2 = right; 3 = aux", b'GEAR STEER ANGLE PCT:index', b'Percent Over 100', 'N'], + "WATER_LEFT_RUDDER_STEER_ANGLE_PCT": ["Water left rudder angle as a percentage", b'WATER LEFT RUDDER STEER ANGLE PCT', b'Percent Over 100', 'N'], + "WATER_RIGHT_RUDDER_STEER_ANGLE_PCT": ["Water right rudder as a percentage", b'WATER RIGHT RUDDER STEER ANGLE PCT', b'Percent Over 100', 'N'], + "WHEEL_RPM:index": ["Wheel rpm. Index is; 0 = center; 1 = left; 2 = right; 3 = aux", b'WHEEL RPM:index', b'Rpm', 'N'], + "CENTER_WHEEL_RPM": ["Center landing gear rpm", b'CENTER WHEEL RPM', b'Rpm', 'N'], + "LEFT_WHEEL_RPM": ["Left landing gear rpm", b'LEFT WHEEL RPM', b'Rpm', 'N'], + "RIGHT_WHEEL_RPM": ["Right landing gear rpm", b'RIGHT WHEEL RPM', b'Rpm', 'N'], + "AUX_WHEEL_RPM": ["Rpm of fourth set of gear wheels.", b'AUX WHEEL RPM', b'Rpm', 'N'], + "WHEEL_ROTATION_ANGLE:index": ["Wheel rotation angle. 
Index is; 0 = center; 1 = left; 2 = right; 3 = aux", b'WHEEL ROTATION ANGLE:index', b'Radians', 'N'], + "CENTER_WHEEL_ROTATION_ANGLE": ["Center wheel rotation angle", b'CENTER WHEEL ROTATION ANGLE', b'Radians', 'N'], + "LEFT_WHEEL_ROTATION_ANGLE": ["Left wheel rotation angle", b'LEFT WHEEL ROTATION ANGLE', b'Radians', 'N'], + "RIGHT_WHEEL_ROTATION_ANGLE": ["Right wheel rotation angle", b'RIGHT WHEEL ROTATION ANGLE', b'Radians', 'N'], + "AUX_WHEEL_ROTATION_ANGLE": ["Aux wheel rotation angle", b'AUX WHEEL ROTATION ANGLE', b'Radians', 'N'], + "GEAR_EMERGENCY_HANDLE_POSITION": ["True if gear emergency handle applied", b'GEAR EMERGENCY HANDLE POSITION', b'Bool', 'N'], + "GEAR_WARNING": ["One of:; 0: unknown; 1: normal; 2: amphib", b'GEAR WARNING', b'Enum', 'N'], + "ANTISKID_BRAKES_ACTIVE": ["True if antiskid brakes active", b'ANTISKID BRAKES ACTIVE', b'Bool', 'N'], + "RETRACT_FLOAT_SWITCH": ["True if retract float switch on", b'RETRACT FLOAT SWITCH', b'Bool', 'N'], + "RETRACT_LEFT_FLOAT_EXTENDED": ["If aircraft has retractable floats.", b'RETRACT LEFT FLOAT EXTENDED', b'Percent', 'N'], + "RETRACT_RIGHT_FLOAT_EXTENDED": ["If aircraft has retractable floats.", b'RETRACT RIGHT FLOAT EXTENDED', b'Percent', 'N'], + "STEER_INPUT_CONTROL": ["Position of steering tiller", b'STEER INPUT CONTROL', b'Percent over 100', 'N'], + "GEAR_DAMAGE_BY_SPEED": ["True if gear has been damaged by excessive speed", b'GEAR DAMAGE BY SPEED', b'Bool', 'N'], + "GEAR_SPEED_EXCEEDED": ["True if safe speed limit for gear exceeded", b'GEAR SPEED EXCEEDED', b'Bool', 'N'], + "NOSEWHEEL_LOCK_ON": ["True if the nosewheel lock is engaged.", b'NOSEWHEEL LOCK ON', b'Bool', 'N'], + } + + class __AircraftEnvironmentData(RequestHelper): + list = { + "AMBIENT_DENSITY": ["Ambient density", b'AMBIENT DENSITY', b'Slugs per cubic feet', 'N'], + "AMBIENT_TEMPERATURE": ["Ambient temperature", b'AMBIENT TEMPERATURE', b'Celsius', 'N'], + "AMBIENT_PRESSURE": ["Ambient pressure", b'AMBIENT PRESSURE', b'inHg', 'N'], + "AMBIENT_WIND_VELOCITY": ["Wind velocity", b'AMBIENT WIND VELOCITY', b'Knots', 'N'], + "AMBIENT_WIND_DIRECTION": ["Wind direction", b'AMBIENT WIND DIRECTION', b'Degrees', 'N'], + "AMBIENT_WIND_X": ["Wind component in East/West direction.", b'AMBIENT WIND X', b'Meters per second', 'N'], + "AMBIENT_WIND_Y": ["Wind component in vertical direction.", b'AMBIENT WIND Y', b'Meters per second', 'N'], + "AMBIENT_WIND_Z": ["Wind component in North/South direction.", b'AMBIENT WIND Z', b'Meters per second', 'N'], + "STRUCT_AMBIENT_WIND": ["X (latitude), Y (vertical) and Z (longitude) components of the wind.", b'STRUCT AMBIENT WIND', b'Feet per second', 'N'], + "AIRCRAFT_WIND_X": ["Wind component in aircraft lateral axis", b'AIRCRAFT WIND X', b'Knots', 'N'], + "AIRCRAFT_WIND_Y": ["Wind component in aircraft vertical axis", b'AIRCRAFT WIND Y', b'Knots', 'N'], + "AIRCRAFT_WIND_Z": ["Wind component in aircraft longitudinal axis", b'AIRCRAFT WIND Z', b'Knots', 'N'], + "BAROMETER_PRESSURE": ["Barometric pressure", b'BAROMETER PRESSURE', b'Millibars', 'N'], + "SEA_LEVEL_PRESSURE": ["Barometric pressure at sea level", b'SEA LEVEL PRESSURE', b'Millibars', 'N'], + "TOTAL_AIR_TEMPERATURE": ["Total air temperature is the air temperature at the front of the aircraft where the ram pressure from the speed of the aircraft is taken into account.", b'TOTAL AIR TEMPERATURE', b'Celsius', 'N'], + "WINDSHIELD_RAIN_EFFECT_AVAILABLE": ["Is visual effect available on this aircraft", b'WINDSHIELD RAIN EFFECT AVAILABLE', b'Bool', 'N'], + "AMBIENT_IN_CLOUD": ["True if the 
aircraft is in a cloud.", b'AMBIENT IN CLOUD', b'Bool', 'N'], + "AMBIENT_VISIBILITY": ["Ambient visibility", b'AMBIENT VISIBILITY', b'Meters', 'N'], + "STANDARD_ATM_TEMPERATURE": ["Outside temperature on the standard ATM scale", b'STANDARD ATM TEMPERATURE', b'Rankine', 'N'], + } + + class __HelicopterSpecificData(RequestHelper): + list = { + "ROTOR_BRAKE_HANDLE_POS": ["Percent actuated", b'ROTOR BRAKE HANDLE POS', b'Percent Over 100', 'N'], + "ROTOR_BRAKE_ACTIVE": ["Active", b'ROTOR BRAKE ACTIVE', b'Bool', 'N'], + "ROTOR_CLUTCH_SWITCH_POS": ["Switch position", b'ROTOR CLUTCH SWITCH POS', b'Bool', 'N'], + "ROTOR_CLUTCH_ACTIVE": ["Active", b'ROTOR CLUTCH ACTIVE', b'Bool', 'N'], + "ROTOR_TEMPERATURE": ["Main rotor transmission temperature", b'ROTOR TEMPERATURE', b'Rankine', 'N'], + "ROTOR_CHIP_DETECTED": ["Chip detection", b'ROTOR CHIP DETECTED', b'Bool', 'N'], + "ROTOR_GOV_SWITCH_POS": ["Switch position", b'ROTOR GOV SWITCH POS', b'Bool', 'N'], + "ROTOR_GOV_ACTIVE": ["Active", b'ROTOR GOV ACTIVE', b'Bool', 'N'], + "ROTOR_LATERAL_TRIM_PCT": ["Trim percent", b'ROTOR LATERAL TRIM PCT', b'Percent Over 100', 'N'], + "ROTOR_RPM_PCT": ["Percent max rated rpm", b'ROTOR RPM PCT', b'Percent Over 100', 'N'], + "ENG_TURBINE_TEMPERATURE": ["Turbine temperature. Applies only to Bell helicopter.", b'ENG TURBINE TEMPERATURE', b'Celsius', 'N'], + "ENG_TORQUE_PERCENT:index": ["Torque. Returns main rotor torque for Bell helicopter, or the indexed rotor torque of other helicopters.", b'ENG TORQUE PERCENT:index', b'Percent scalar 16K (Ft/lbs * 16384)', 'N'], + "ENG_FUEL_PRESSURE": ["Fuel pressure. Applies only to Bell helicopter.", b'ENG FUEL PRESSURE', b'PSI', 'N'], + "ENG_ELECTRICAL_LOAD": ["Electrical load. Applies only to Bell helicopter.", b'ENG ELECTRICAL LOAD', b'Percent', 'N'], + "ENG_TRANSMISSION_PRESSURE": ["Transmission pressure. Applies only to Bell helicopter.", b'ENG TRANSMISSION PRESSURE', b'PSI', 'N'], + "ENG_TRANSMISSION_TEMPERATURE": ["Transmission temperature. Applies only to Bell helicopter.", b'ENG TRANSMISSION TEMPERATURE', b'Celsius', 'N'], + "ENG_ROTOR_RPM:index": ["Rotor rpm. Returns main rotor rpm for Bell helicopter, or the indexed rotor rpm of other helicopters.", b'ENG ROTOR RPM:index', b'Percent scalar 16K (Max rpm * 16384)', 'N'], + "COLLECTIVE_POSITION": ["The position of the helicopter's collective. 0 is fully up, 100 fully depressed.", b'COLLECTIVE POSITION', b'Percent over 100', 'N'], + } + + class __SlingsandHoists(RequestHelper): + list = { + "NUM_SLING_CABLES": ["The number of sling cables (not hoists) that are configured for the aircraft. Refer to the document Notes on Aircraft Systems.", b'NUM SLING CABLES', b'Number', 'N'], + "PAYLOAD_STATION_OBJECT:index": ["Places the named object at the payload station identified by the index (starting from 1). The string is the Container name (refer to the title property of Simulation Object Configuration Files).", b'PAYLOAD STATION OBJECT:index', b'String', 'Y- set only'], + "PAYLOAD_STATION_NUM_SIMOBJECTS:index": ["The number of objects at the payload station (indexed from 1).", b'PAYLOAD STATION NUM SIMOBJECTS:index', b'Number', 'N'], + "SLING_OBJECT_ATTACHED:index": ["If units are set as boolean, returns True if a sling object is attached. If units are set as a string, returns the container title of the object. There can be multiple sling positions, indexed from 1. 
The sling positions are set in the Aircraft Configuration File.", b'SLING OBJECT ATTACHED:index', b'Bool/String', 'N'], + "SLING_CABLE_BROKEN:index": ["True if the cable is broken.", b'SLING CABLE BROKEN:index', b'Bool', 'N'], + "SLING_CABLE_EXTENDED_LENGTH:index": ["The length of the cable extending from the aircraft.", b'SLING CABLE EXTENDED LENGTH:index', b'Feet', 'Y'], + "SLING_ACTIVE_PAYLOAD_STATION:index": ["The payload station (identified by the parameter) where objects will be placed from the sling (identified by the index).", b'SLING ACTIVE PAYLOAD STATION:index', b'Number', 'Y'], + "SLING_HOIST_PERCENT_DEPLOYED:index": ["The percentage of the full length of the sling cable deployed.", b'SLING HOIST PERCENT DEPLOYED:index', b'Percent over 100', 'N'], + "IS_ATTACHED_TO_SLING": ["Set to true if this object is attached to a sling.", b'IS ATTACHED TO SLING', b'Bool', 'N'], + } + + class __AircraftMiscellaneousSystemsData(RequestHelper): + list = { + "SMOKE_ENABLE": ["Set to True to activate the smoke system, if one is available (for example, on the Extra).", b'SMOKE ENABLE', b'Bool', 'Y'], + "SMOKESYSTEM_AVAILABLE": ["Smoke system available", b'SMOKESYSTEM AVAILABLE', b'Bool', 'N'], + "PITOT_HEAT": ["Pitot heat active", b'PITOT HEAT', b'Bool', 'N'], + "FOLDING_WING_LEFT_PERCENT": ["Left folding wing position, 100 is fully folded", b'FOLDING WING LEFT PERCENT', b'Percent Over 100', 'Y'], + "FOLDING_WING_RIGHT_PERCENT": ["Right folding wing position, 100 is fully folded", b'FOLDING WING RIGHT PERCENT', b'Percent Over 100', 'Y'], + "CANOPY_OPEN": ["Percent primary door/exit open", b'CANOPY OPEN', b'Percent Over 100', 'Y'], + "TAILHOOK_POSITION": ["Percent tail hook extended", b'TAILHOOK POSITION', b'Percent Over 100', 'Y'], + "EXIT_OPEN:index": ["Percent door/exit open", b'EXIT OPEN:index', b'Percent Over 100', 'Y'], + "STALL_HORN_AVAILABLE": ["True if stall alarm available", b'STALL HORN AVAILABLE', b'Bool', 'N'], + "ENGINE_MIXURE_AVAILABLE": ["True if engine mixture is available for prop engines. Obsolete value as mixture is always available. 
Spelling error in variable name.", b'ENGINE MIXURE AVAILABLE', b'Bool', 'N'], + "CARB_HEAT_AVAILABLE": ["True if carb heat available", b'CARB HEAT AVAILABLE', b'Bool', 'N'], + "SPOILER_AVAILABLE": ["True if spoiler system available", b'SPOILER AVAILABLE', b'Bool', 'N'], + "IS_TAIL_DRAGGER": ["True if the aircraft is a taildragger", b'IS TAIL DRAGGER', b'Bool', 'N'], + "STROBES_AVAILABLE": ["True if strobe lights are available", b'STROBES AVAILABLE', b'Bool', 'N'], + "TOE_BRAKES_AVAILABLE": ["True if toe brakes are available", b'TOE BRAKES AVAILABLE', b'Bool', 'N'], + "PUSHBACK_STATE": ["Type of pushback :; 0 = Straight; 1 = Left; 2 = Right", b'PUSHBACK STATE', b'Enum', 'Y'], + "ELECTRICAL_MASTER_BATTERY": ["Battery switch position", b'ELECTRICAL MASTER BATTERY', b'Bool', 'Y'], + "ELECTRICAL_TOTAL_LOAD_AMPS": ["Total load amps", b'ELECTRICAL TOTAL LOAD AMPS', b'Amperes', 'Y'], + "ELECTRICAL_BATTERY_LOAD": ["Battery load", b'ELECTRICAL BATTERY LOAD', b'Amperes', 'Y'], + "ELECTRICAL_BATTERY_VOLTAGE": ["Battery voltage", b'ELECTRICAL BATTERY VOLTAGE', b'Volts', 'Y'], + "ELECTRICAL_MAIN_BUS_VOLTAGE": ["Main bus voltage", b'ELECTRICAL MAIN BUS VOLTAGE', b'Volts', 'Y'], + "ELECTRICAL_MAIN_BUS_AMPS": ["Main bus current", b'ELECTRICAL MAIN BUS AMPS', b'Amperes', 'Y'], + "ELECTRICAL_AVIONICS_BUS_VOLTAGE": ["Avionics bus voltage", b'ELECTRICAL AVIONICS BUS VOLTAGE', b'Volts', 'Y'], + "ELECTRICAL_AVIONICS_BUS_AMPS": ["Avionics bus current", b'ELECTRICAL AVIONICS BUS AMPS', b'Amperes', 'Y'], + "ELECTRICAL_HOT_BATTERY_BUS_VOLTAGE": ["Voltage available when battery switch is turned off", b'ELECTRICAL HOT BATTERY BUS VOLTAGE', b'Volts', 'Y'], + "ELECTRICAL_HOT_BATTERY_BUS_AMPS": ["Current available when battery switch is turned off", b'ELECTRICAL HOT BATTERY BUS AMPS', b'Amperes', 'Y'], + "ELECTRICAL_BATTERY_BUS_VOLTAGE": ["Battery bus voltage", b'ELECTRICAL BATTERY BUS VOLTAGE', b'Volts', 'Y'], + "ELECTRICAL_BATTERY_BUS_AMPS": ["Battery bus current", b'ELECTRICAL BATTERY BUS AMPS', b'Amperes', 'Y'], + "ELECTRICAL_GENALT_BUS_VOLTAGE:index": ["Genalt bus voltage (takes engine index)", b'ELECTRICAL GENALT BUS VOLTAGE:index', b'Volts', 'Y'], + "ELECTRICAL_GENALT_BUS_AMPS:index": ["Genalt bus current (takes engine index)", b'ELECTRICAL GENALT BUS AMPS:index', b'Amperes', 'Y'], + "CIRCUIT_GENERAL_PANEL_ON": ["Is electrical power available to this circuit", b'CIRCUIT GENERAL PANEL ON', b'Bool', 'N'], + "CIRCUIT_FLAP_MOTOR_ON": ["Is electrical power available to this circuit", b'CIRCUIT FLAP MOTOR ON', b'Bool', 'N'], + "CIRCUIT_GEAR_MOTOR_ON": ["Is electrical power available to this circuit", b'CIRCUIT GEAR MOTOR ON', b'Bool', 'N'], + "CIRCUIT_AUTOPILOT_ON": ["Is electrical power available to this circuit", b'CIRCUIT AUTOPILOT ON', b'Bool', 'N'], + "CIRCUIT_AVIONICS_ON": ["Is electrical power available to this circuit", b'CIRCUIT AVIONICS ON', b'Bool', 'N'], + "CIRCUIT_PITOT_HEAT_ON": ["Is electrical power available to this circuit", b'CIRCUIT PITOT HEAT ON', b'Bool', 'N'], + "CIRCUIT_PROP_SYNC_ON": ["Is electrical power available to this circuit", b'CIRCUIT PROP SYNC ON', b'Bool', 'N'], + "CIRCUIT_AUTO_FEATHER_ON": ["Is electrical power available to this circuit", b'CIRCUIT AUTO FEATHER ON', b'Bool', 'N'], + "CIRCUIT_AUTO_BRAKES_ON": ["Is electrical power available to this circuit", b'CIRCUIT AUTO BRAKES ON', b'Bool', 'N'], + "CIRCUIT_STANDY_VACUUM_ON": ["Is electrical power available to this circuit", b'CIRCUIT STANDY VACUUM ON', b'Bool', 'N'], + "CIRCUIT_MARKER_BEACON_ON": ["Is electrical power available to 
this circuit", b'CIRCUIT MARKER BEACON ON', b'Bool', 'N'], + "CIRCUIT_GEAR_WARNING_ON": ["Is electrical power available to this circuit", b'CIRCUIT GEAR WARNING ON', b'Bool', 'N'], + "CIRCUIT_HYDRAULIC_PUMP_ON": ["Is electrical power available to this circuit", b'CIRCUIT HYDRAULIC PUMP ON', b'Bool', 'N'], + "HYDRAULIC_PRESSURE:index": ["Hydraulic system pressure. Indexes start at 1.", b'HYDRAULIC PRESSURE:index', b'Pound force per square foot', 'N'], + "HYDRAULIC_RESERVOIR_PERCENT:index": ["Hydraulic pressure changes will follow changes to this variable. Indexes start at 1.", b'HYDRAULIC RESERVOIR PERCENT:index', b'Percent Over 100', 'Y'], + "HYDRAULIC_SYSTEM_INTEGRITY": ["Percent system functional", b'HYDRAULIC SYSTEM INTEGRITY', b'Percent Over 100', 'N'], + "STRUCTURAL_DEICE_SWITCH": ["True if the aircraft structure deice switch is on", b'STRUCTURAL DEICE SWITCH', b'Bool', 'N'], + "APPLY_HEAT_TO_SYSTEMS": ["Used when too close to a fire.", b'APPLY HEAT TO SYSTEMS', b'Bool', 'Y'], + "DROPPABLE_OBJECTS_TYPE:index": ["The type of droppable object at the station number identified by the index.", b'DROPPABLE OBJECTS TYPE:index', b'String', 'Y'], + "DROPPABLE_OBJECTS_COUNT:index": ["The number of droppable objects at the station number identified by the index.", b'DROPPABLE OBJECTS COUNT:index', b'Number', 'N'], + } + + class __AircraftMiscellaneousData(RequestHelper): + list = { + "TOTAL_WEIGHT": ["Total weight of the aircraft", b'TOTAL WEIGHT', b'Pounds', 'N'], + "MAX_GROSS_WEIGHT": ["Maximum gross weight of the aircaft", b'MAX GROSS WEIGHT', b'Pounds', 'N'], + "EMPTY_WEIGHT": ["Empty weight of the aircraft", b'EMPTY WEIGHT', b'Pounds', 'N'], + "IS_USER_SIM": ["Is this the user loaded aircraft", b'IS USER SIM', b'Bool', 'N'], + "SIM_DISABLED": ["Is sim disabled", b'SIM DISABLED', b'Bool', 'Y'], + "G_FORCE": ["Current g force", b'G FORCE', b'GForce', 'Y'], + "ATC_HEAVY": ["Is this aircraft recognized by ATC as heavy", b'ATC HEAVY', b'Bool', 'Y'], + "AUTO_COORDINATION": ["Is auto-coordination active", b'AUTO COORDINATION', b'Bool', 'Y'], + "REALISM": ["General realism percent", b'REALISM', b'Number', 'Y'], + "TRUE_AIRSPEED_SELECTED": ["True if True Airspeed has been selected", b'TRUE AIRSPEED SELECTED', b'Bool', 'Y'], + "DESIGN_SPEED_VS0": ["Design speed at VS0", b'DESIGN SPEED VS0', b'Feet per second', 'N'], + "DESIGN_SPEED_VS1": ["Design speed at VS1", b'DESIGN SPEED VS1', b'Feet per second', 'N'], + "DESIGN_SPEED_VC": ["Design speed at VC", b'DESIGN SPEED VC', b'Feet per second', 'N'], + "MIN_DRAG_VELOCITY": ["Minimum drag velocity", b'MIN DRAG VELOCITY', b'Feet per second', 'N'], + "ESTIMATED_CRUISE_SPEED": ["Estimated cruise speed", b'ESTIMATED CRUISE SPEED', b'Feet per second', 'N'], + "CG_PERCENT": ["Longitudinal CG position as a percent of reference chord", b'CG PERCENT', b'Percent over 100', 'N'], + "CG_PERCENT_LATERAL": ["Lateral CG position as a percent of reference chord", b'CG PERCENT LATERAL', b'Percent over 100', 'N'], + "IS_SLEW_ACTIVE": ["True if slew is active", b'IS SLEW ACTIVE', b'Bool', 'Y'], + "IS_SLEW_ALLOWED": ["True if slew is enabled", b'IS SLEW ALLOWED', b'Bool', 'Y'], + "ATC_SUGGESTED_MIN_RWY_TAKEOFF": ["Suggested minimum runway length for takeoff. Used by ATC ", b'ATC SUGGESTED MIN RWY TAKEOFF', b'Feet', 'N'], + "ATC_SUGGESTED_MIN_RWY_LANDING": ["Suggested minimum runway length for landing. 
Used by ATC ", b'ATC SUGGESTED MIN RWY LANDING', b'Feet', 'N'], + "PAYLOAD_STATION_WEIGHT:index": ["Individual payload station weight", b'PAYLOAD STATION WEIGHT:index', b'Pounds', 'Y'], + "PAYLOAD_STATION_COUNT": ["Number of payload stations", b'PAYLOAD STATION COUNT', b'Number', 'N'], + "USER_INPUT_ENABLED": ["Is input allowed from the user", b'USER INPUT ENABLED', b'Bool', 'Y'], + "TYPICAL_DESCENT_RATE": ["Normal descent rate", b'TYPICAL DESCENT RATE', b'Feet per minute', 'N'], + "VISUAL_MODEL_RADIUS": ["Model radius", b'VISUAL MODEL RADIUS', b'Meters', 'N'], + "SIGMA_SQRT": ["Sigma sqrt", b'SIGMA SQRT', b'Number', 'N'], + "DYNAMIC_PRESSURE": ["Dynamic pressure", b'DYNAMIC PRESSURE', b'foot pounds', 'N'], + "TOTAL_VELOCITY": ["Velocity regardless of direction. For example, if a helicopter is ascending vertically at 100 fps, getting this variable will return 100.", b'TOTAL VELOCITY', b'Feet per second', 'N'], + "AIRSPEED_SELECT_INDICATED_OR_TRUE": ["The airspeed, whether true or indicated airspeed has been selected.", b'AIRSPEED SELECT INDICATED OR TRUE', b'Knots', 'N'], + "VARIOMETER_RATE": ["Variometer rate", b'VARIOMETER RATE', b'Feet per second', 'N'], + "VARIOMETER_SWITCH": ["True if the variometer switch is on", b'VARIOMETER SWITCH', b'Bool', 'N'], + "PRESSURE_ALTITUDE": ["Altitude reading", b'PRESSURE ALTITUDE', b'Meters', 'N'], + "MAGNETIC_COMPASS": ["Compass reading", b'MAGNETIC COMPASS', b'Degrees', 'N'], + "TURN_INDICATOR_RATE": ["Turn indicator reading", b'TURN INDICATOR RATE', b'Radians per second', 'N'], + "TURN_INDICATOR_SWITCH": ["True if turn indicator switch is on", b'TURN INDICATOR SWITCH', b'Bool', 'N'], + "YOKE_Y_INDICATOR": ["Yoke position in vertical direction", b'YOKE Y INDICATOR', b'Position', 'N'], + "YOKE_X_INDICATOR": ["Yoke position in horizontal direction", b'YOKE X INDICATOR', b'Position', 'N'], + "RUDDER_PEDAL_INDICATOR": ["Rudder pedal position", b'RUDDER PEDAL INDICATOR', b'Position', 'N'], + "BRAKE_DEPENDENT_HYDRAULIC_PRESSURE": ["Brake dependent hydraulic pressure reading", b'BRAKE DEPENDENT HYDRAULIC PRESSURE', b'foot pounds', 'N'], + "PANEL_ANTI_ICE_SWITCH": ["True if panel anti-ice switch is on", b'PANEL ANTI ICE SWITCH', b'Bool', 'N'], + "WING_AREA": ["Total wing area", b'WING AREA', b'Square feet', 'N'], + "WING_SPAN": ["Total wing span", b'WING SPAN', b'Feet', 'N'], + "BETA_DOT": ["Beta dot", b'BETA DOT', b'Radians per second', 'N'], + "LINEAR_CL_ALPHA": ["Linear CL alpha", b'LINEAR CL ALPHA', b'Per radian', 'N'], + "STALL_ALPHA": ["Stall alpha", b'STALL ALPHA', b'Radians', 'N'], + "ZERO_LIFT_ALPHA": ["Zero lift alpha", b'ZERO LIFT ALPHA', b'Radians', 'N'], + "CG_AFT_LIMIT": ["Aft limit of CG", b'CG AFT LIMIT', b'Percent over 100', 'N'], + "CG_FWD_LIMIT": ["Forward limit of CG", b'CG FWD LIMIT', b'Percent over 100', 'N'], + "CG_MAX_MACH": ["Max mach CG", b'CG MAX MACH', b'Machs', 'N'], + "CG_MIN_MACH": ["Min mach CG", b'CG MIN MACH', b'Machs', 'N'], + "PAYLOAD_STATION_NAME": ["Descriptive name for payload station", b'PAYLOAD STATION NAME', b'String', 'N'], + "ELEVON_DEFLECTION": ["Elevon deflection", b'ELEVON DEFLECTION', b'Radians', 'N'], + "EXIT_TYPE": ["One of:; 0: Main; 1: Cargo; 2: Emergency; 3: Unknown", b'EXIT TYPE', b'Enum', 'N'], + "EXIT_POSX": ["Position of exit relative to datum reference point", b'EXIT POSX', b'Feet', 'N'], + "EXIT_POSY": ["Position of exit relative to datum reference point", b'EXIT POSY', b'Feet', 'N'], + "EXIT_POSZ": ["Position of exit relative to datum reference point", b'EXIT POSZ', b'Feet', 'N'], + 
"DECISION_HEIGHT": ["Design decision height", b'DECISION HEIGHT', b'Feet', 'N'], + "DECISION_ALTITUDE_MSL": ["Design decision altitude above mean sea level", b'DECISION ALTITUDE MSL', b'Feet', 'N'], + "EMPTY_WEIGHT_PITCH_MOI": ["Empty weight pitch moment of inertia", b'EMPTY WEIGHT PITCH MOI', b'slug feet squared', 'N'], + "EMPTY_WEIGHT_ROLL_MOI": ["Empty weight roll moment of inertia", b'EMPTY WEIGHT ROLL MOI', b'slug feet squared', 'N'], + "EMPTY_WEIGHT_YAW_MOI": ["Empty weight yaw moment of inertia", b'EMPTY WEIGHT YAW MOI', b'slug feet squared', 'N'], + "EMPTY_WEIGHT_CROSS_COUPLED_MOI": ["Empty weigth cross coupled moment of inertia", b'EMPTY WEIGHT CROSS COUPLED MOI', b'slug feet squared', 'N'], + "TOTAL_WEIGHT_PITCH_MOI": ["Total weight pitch moment of inertia", b'TOTAL WEIGHT PITCH MOI', b'slug feet squared', 'N'], + "TOTAL_WEIGHT_ROLL_MOI": ["Total weight roll moment of inertia", b'TOTAL WEIGHT ROLL MOI', b'slug feet squared', 'N'], + "TOTAL_WEIGHT_YAW_MOI": ["Total weight yaw moment of inertia", b'TOTAL WEIGHT YAW MOI', b'slug feet squared', 'N'], + "TOTAL_WEIGHT_CROSS_COUPLED_MOI": ["Total weight cross coupled moment of inertia", b'TOTAL WEIGHT CROSS COUPLED MOI', b'slug feet squared', 'N'], + "WATER_BALLAST_VALVE": ["True if water ballast valve is available", b'WATER BALLAST VALVE', b'Bool', 'N'], + "MAX_RATED_ENGINE_RPM": ["Maximum rated rpm", b'MAX RATED ENGINE RPM', b'Rpm', 'N'], + "FULL_THROTTLE_THRUST_TO_WEIGHT_RATIO": ["Full throttle thrust to weight ratio", b'FULL THROTTLE THRUST TO WEIGHT RATIO', b'Number', 'N'], + "PROP_AUTO_CRUISE_ACTIVE": ["True if prop auto cruise active", b'PROP AUTO CRUISE ACTIVE', b'Bool', 'N'], + "PROP_ROTATION_ANGLE": ["Prop rotation angle", b'PROP ROTATION ANGLE', b'Radians', 'N'], + "PROP_BETA_MAX": ["Prop beta max", b'PROP BETA MAX', b'Radians', 'N'], + "PROP_BETA_MIN": ["Prop beta min", b'PROP BETA MIN', b'Radians', 'N'], + "PROP_BETA_MIN_REVERSE": ["Prop beta min reverse", b'PROP BETA MIN REVERSE', b'Radians', 'N'], + "FUEL_SELECTED_TRANSFER_MODE": ["One of:; -1: off; 0: auto; 1: forward; 2: aft; 3: manual", b'FUEL SELECTED TRANSFER MODE', b'Enum', 'N'], + "DROPPABLE_OBJECTS_UI_NAME": ["Descriptive name, used in User Interface dialogs, of a droppable object", b'DROPPABLE OBJECTS UI NAME', b'String', 'N'], + "MANUAL_FUEL_PUMP_HANDLE": ["Position of manual fuel pump handle. 100 is fully deployed.", b'MANUAL FUEL PUMP HANDLE', b'Percent over 100', 'N'], + "BLEED_AIR_SOURCE_CONTROL": ["One of:; 0: min; 1: auto; 2: off; 3: apu; 4: engines", b'BLEED AIR SOURCE CONTROL', b'Enum', 'N'], + "ELECTRICAL_OLD_CHARGING_AMPS": ["Legacy, use ELECTRICAL BATTERY LOAD", b'ELECTRICAL OLD CHARGING AMPS', b'Amps', 'N'], + "HYDRAULIC_SWITCH": ["True if hydraulic switch is on", b'HYDRAULIC SWITCH', b'Bool', 'N'], + "CONCORDE_VISOR_POSITION_PERCENT": ["0 = up, 1.0 = extended/down", b'CONCORDE VISOR POSITION PERCENT', b'Percent over 100', 'N'], + "CONCORDE_NOSE_ANGLE": ["0 = up", b'CONCORDE NOSE ANGLE', b'Radians', 'N'], + "REALISM_CRASH_WITH_OTHERS": ["True indicates crashing with other aircraft is possible.", b'REALISM CRASH WITH OTHERS', b'Bool', 'N'], + "REALISM_CRASH_DETECTION": ["True indicates crash detection is turned on.", b'REALISM CRASH DETECTION', b'Bool', 'N'], + "MANUAL_INSTRUMENT_LIGHTS": ["True if instrument lights are set manually", b'MANUAL INSTRUMENT LIGHTS', b'Bool', 'N'], + "PITOT_ICE_PCT": ["Amount of pitot ice. 
100 is fully iced.", b'PITOT ICE PCT', b'Percent over 100', 'N'], + "SEMIBODY_LOADFACTOR_Y": ["Semibody loadfactor x and z are not supported.", b'SEMIBODY LOADFACTOR Y', b'Number', 'N'], + "SEMIBODY_LOADFACTOR_YDOT": ["Semibody loadfactory ydot", b'SEMIBODY LOADFACTOR YDOT', b'Per second', 'N'], + "RAD_INS_SWITCH": ["True if Rad INS switch on", b'RAD INS SWITCH', b'Bool', 'N'], + "SIMULATED_RADIUS": ["Simulated radius", b'SIMULATED RADIUS', b'Feet', 'N'], + "STRUCTURAL_ICE_PCT": ["Amount of ice on aircraft structure. 100 is fully iced.", b'STRUCTURAL ICE PCT', b'Percent over 100', 'N'], + "ARTIFICIAL_GROUND_ELEVATION": ["In case scenery is not loaded for AI planes, this variable can be used to set a default surface elevation.", b'ARTIFICIAL GROUND ELEVATION', b'Feet', 'N'], + "SURFACE_INFO_VALID": ["True indicates SURFACE CONDITION is meaningful.", b'SURFACE INFO VALID', b'Bool', 'N'], + "SURFACE_CONDITION": ["One of:; 0: Normal; 1: Wet; 2: Icy; 3: Snow", b'SURFACE CONDITION', b'Enum', 'N'], + "PUSHBACK_ANGLE": ["Pushback angle (the heading of the tug)", b'PUSHBACK ANGLE', b'Radians', 'N'], + "PUSHBACK_CONTACTX": ["The towpoint position, relative to the aircrafts datum reference point.", b'PUSHBACK CONTACTX', b'Feet', 'N'], + "PUSHBACK_CONTACTY": ["Pushback contact position in vertical direction", b'PUSHBACK CONTACTY', b'Feet', 'N'], + "PUSHBACK_CONTACTZ": ["Pushback contact position in fore/aft direction", b'PUSHBACK CONTACTZ', b'Feet', 'N'], + "PUSHBACK_WAIT": ["True if waiting for pushback.", b'PUSHBACK WAIT', b'Bool', 'N'], + "YAW_STRING_ANGLE": ["The yaw string angle. Yaw strings are attached to gliders as visible indicators of the yaw angle. An animation of this is not implemented in ESP.", b'YAW STRING ANGLE', b'Radians', 'N'], + "YAW_STRING_PCT_EXTENDED": ["Yaw string angle as a percentage", b'YAW STRING PCT EXTENDED', b'Percent over 100', 'N'], + "INDUCTOR_COMPASS_PERCENT_DEVIATION": ["Inductor compass deviation reading", b'INDUCTOR COMPASS PERCENT DEVIATION', b'Percent over 100', 'N'], + "INDUCTOR_COMPASS_HEADING_REF": ["Inductor compass heading", b'INDUCTOR COMPASS HEADING REF', b'Radians', 'N'], + "ANEMOMETER_PCT_RPM": ["Anemometer rpm as a percentage", b'ANEMOMETER PCT RPM', b'Percent over 100', 'N'], + "ROTOR_ROTATION_ANGLE": ["Main rotor rotation angle (helicopters only)", b'ROTOR ROTATION ANGLE', b'Radians', 'N'], + "DISK_PITCH_ANGLE": ["Main rotor pitch angle (helicopters only)", b'DISK PITCH ANGLE', b'Radians', 'N'], + "DISK_BANK_ANGLE": ["Main rotor bank angle (helicopters only)", b'DISK BANK ANGLE', b'Radians', 'N'], + "DISK_PITCH_PCT": ["Main rotor pitch percent (helicopters only)", b'DISK PITCH PCT', b'Percent over 100', 'N'], + "DISK_BANK_PCT": ["Main rotor bank percent (helicopters only)", b'DISK BANK PCT', b'Percent over 100', 'N'], + "DISK_CONING_PCT": ["Main rotor coning percent (helicopters only)", b'DISK CONING PCT', b'Percent over 100', 'N'], + # "NAV_VOR_LLAF64": ["Nav VOR latitude, longitude, altitude", b'NAV VOR LLAF64', b'SIMCONNECT_DATA_LATLONALT', 'N'], + # "NAV_GS_LLAF64": ["Nav GS latitude, longitude, altitude", b'NAV GS LLAF64', b'SIMCONNECT_DATA_LATLONALT', 'N'], + "STATIC_CG_TO_GROUND": ["Static CG to ground", b'STATIC CG TO GROUND', b'Feet', 'N'], + "STATIC_PITCH": ["Static pitch", b'STATIC PITCH', b'Radians', 'N'], + "CRASH_SEQUENCE": ["One of:; 0: off; 1: complete; 3: reset; 4: pause; 11: start", b'CRASH SEQUENCE', b'Enum', 'N'], + "CRASH_FLAG": ["One of:; 0: None; 2: Mountain; 4: General; 6: Building; 8: Splash; 10: Gear up; 12: Overstress; 14: 
Building; 16: Aircraft; 18: Fuel Truck", b'CRASH FLAG', b'Enum', 'N'], + "TOW_RELEASE_HANDLE": ["Position of tow release handle. 100 is fully deployed.", b'TOW RELEASE HANDLE', b'Percent over 100', 'N'], + "TOW_CONNECTION": ["True if a towline is connected to both tow plane and glider.", b'TOW CONNECTION', b'Bool', 'N'], + "APU_PCT_RPM": ["Auxiliary power unit rpm, as a percentage", b'APU PCT RPM', b'Percent over 100', 'N'], + "APU_PCT_STARTER": ["Auxiliary power unit starter, as a percentage", b'APU PCT STARTER', b'Percent over 100', 'N'], + "APU_VOLTS": ["Auxiliary power unit voltage", b'APU VOLTS', b'Volts', 'N'], + "APU_GENERATOR_SWITCH": ["True if APU generator switch on", b'APU GENERATOR SWITCH', b'Bool', 'N'], + "APU_GENERATOR_ACTIVE": ["True if APU generator active", b'APU GENERATOR ACTIVE', b'Bool', 'N'], + "APU_ON_FIRE_DETECTED": ["True if APU on fire", b'APU ON FIRE DETECTED', b'Bool', 'N'], + "PRESSURIZATION_CABIN_ALTITUDE": ["The current altitude of the cabin pressurization.", b'PRESSURIZATION CABIN ALTITUDE', b'Feet', 'N'], + "PRESSURIZATION_CABIN_ALTITUDE_GOAL": ["The set altitude of the cabin pressurization.", b'PRESSURIZATION CABIN ALTITUDE GOAL', b'Feet', 'N'], + "PRESSURIZATION_CABIN_ALTITUDE_RATE": ["The rate at which cabin pressurization changes.", b'PRESSURIZATION CABIN ALTITUDE RATE', b'Feet per second', 'N'], + "PRESSURIZATION_PRESSURE_DIFFERENTIAL": ["The difference in pressure between the set altitude pressurization and the current pressurization.", b'PRESSURIZATION PRESSURE DIFFERENTIAL', b'foot pounds', 'N'], + "PRESSURIZATION_DUMP_SWITCH": ["True if the cabin pressurization dump switch is on.", b'PRESSURIZATION DUMP SWITCH', b'Bool', 'N'], + "FIRE_BOTTLE_SWITCH": ["True if the fire bottle switch is on.", b'FIRE BOTTLE SWITCH', b'Bool', 'N'], + "FIRE_BOTTLE_DISCHARGED": ["True if the fire bottle is discharged.", b'FIRE BOTTLE DISCHARGED', b'Bool', 'N'], + "CABIN_NO_SMOKING_ALERT_SWITCH": ["True if the No Smoking switch is on.", b'CABIN NO SMOKING ALERT SWITCH', b'Bool', 'Y'], + "CABIN_SEATBELTS_ALERT_SWITCH": ["True if the Seatbelts switch is on.", b'CABIN SEATBELTS ALERT SWITCH', b'Bool', 'Y'], + "GPWS_WARNING": ["True if Ground Proximity Warning System installed.", b'GPWS WARNING', b'Bool', 'N'], + "GPWS_SYSTEM_ACTIVE": ["True if the Ground Proximity Warning System is active", b'GPWS SYSTEM ACTIVE', b'Bool', 'Y'], + "IS_ALTITUDE_FREEZE_ON": ["True if the altitude of the aircraft is frozen.", b'IS ALTITUDE FREEZE ON', b'Bool', 'N'], + "IS_ATTITUDE_FREEZE_ON": ["True if the attitude (pitch, bank and heading) of the aircraft is frozen.", b'IS ATTITUDE FREEZE ON', b'Bool', 'N'], + # found in sdk + "SLING_HOOK_IN_PICKUP_MODE:index": [" ", b'SLING HOOK IN PICKUP MODE:index', b'Bool', 'N'], + "AI_TRAFFIC_STATE": [" ", b'AI TRAFFIC STATE', b'String', 'N'], + "AI_TRAFFIC_ASSIGNED_PARKING": [" ", b'AI TRAFFIC ASSIGNED PARKING', b'String', 'N'], + "RECIP_ENG_FUEL_TANKS_USED:index": [" ", b'RECIP ENG FUEL TANKS USED:index', b'Mask', 'Y'], + "TURB_ENG_TANKS_USED:index": [" ", b'TURB ENG TANKS USED:index', b'Mask', 'N'], + "ENG_TURBINE_TEMPERATURE:index": [" ", b'ENG TURBINE TEMPERATURE:index', b'Celsius', 'N'], + "ENG_ELECTRICAL_LOAD:index": [" ", b'ENG ELECTRICAL LOAD:index', b'Percent', 'N'], + "ENG_TRANSMISSION_PRESSURE:index": [" ", b'ENG TRANSMISSION PRESSURE:index', b'PSI', 'N'], 
+ "ENG_TRANSMISSION_TEMPERATURE:index": [" ", b'ENG TRANSMISSION TEMPERATURE:index', b'Celsius', 'N'], + "SURFACE_TYPE": [" ", b'SURFACE TYPE', b'Enum', 'N'], + "COM_STATUS:index": [" ", b'COM STATUS:index', b'Enum', 'N'], + "NAV_TOFROM:index": [" ", b'NAV TOFROM:index', b'Enum', 'N'], + "GPS_APPROACH_MODE": [" ", b'GPS APPROACH MODE', b'Enum', 'N'], + "GPS_APPROACH_WP_TYPE": [" ", b'GPS APPROACH WP TYPE', b'Enum', 'N'], + "GPS_APPROACH_SEGMENT_TYPE": [" ", b'GPS APPROACH SEGMENT TYPE', b'Enum', 'N'], + "GPS_APPROACH_APPROACH_TYPE": [" ", b'GPS APPROACH APPROACH TYPE', b'Enum', 'N'], + "AMBIENT_PRECIP_STATE": [" ", b'AMBIENT PRECIP STATE', b'Mask', 'N'], + "CATEGORY": [" ", b'CATEGORY', b'String', 'N'], + "CONCORDE_VISOR_NOSE_HANDLE": [" ", b'CONCORDE VISOR NOSE HANDLE', b'Enum', 'N'], + "IS_LATITUDE_LONGITUDE_FREEZE_ON": [" ", b'IS LATITUDE LONGITUDE FREEZE ON', b'Bool', 'N'], + "TIME_OF_DAY": [" ", b'TIME OF DAY', b'Enum', 'N'], + "SIMULATION_RATE": [" ", b'SIMULATION RATE', b'Number', 'N'], + "UNITS_OF_MEASURE": [" ", b'UNITS OF MEASURE', b'Enum', 'N'], + } + + class __AircraftStringData(RequestHelper): + list = { + "ATC_TYPE": ["Type used by ATC", b'ATC TYPE', b'String', 'N'], + "ATC_MODEL": ["Model used by ATC", b'ATC MODEL', b'String', 'N'], + "ATC_ID": ["ID used by ATC", b'ATC ID', b'String', 'Y'], + "ATC_AIRLINE": ["Airline used by ATC", b'ATC AIRLINE', b'String', 'Y'], + "ATC_FLIGHT_NUMBER": ["Flight Number used by ATC", b'ATC FLIGHT NUMBER', b'String', 'Y'], + "TITLE": ["Title from aircraft.cfg", b'TITLE', b'String', 'N'], + "HSI_STATION_IDENT": ["Tuned station identifier", b'HSI STATION IDENT', b'String', 'N'], + "GPS_WP_PREV_ID": ["ID of previous GPS waypoint", b'GPS WP PREV ID', b'String', 'N'], + "GPS_WP_NEXT_ID": ["ID of next GPS waypoint", b'GPS WP NEXT ID', b'String', 'N'], + "GPS_APPROACH_AIRPORT_ID": ["ID of airport", b'GPS APPROACH AIRPORT ID', b'String', 'N'], + "GPS_APPROACH_APPROACH_ID": ["ID of approach", b'GPS APPROACH APPROACH ID', b'String', 'N'], + "GPS_APPROACH_TRANSITION_ID": ["ID of approach transition", b'GPS APPROACH TRANSITION ID', b'String', 'N'], + } + + class __AIControlledAircraft(RequestHelper): + list = { + "AI_DESIRED_SPEED": ["Desired speed of the AI object.", b'AI DESIRED SPEED', b'Knots', 'Y'], + # "AI_WAYPOINT_LIST": ["List of waypoints that an AI controlled object should follow.", b'AI WAYPOINT LIST', b'SIMCONNECT_DATA_WAYPOINT', 'Y'], + "AI_CURRENT_WAYPOINT": ["Current waypoint in the list", b'AI CURRENT WAYPOINT', b'Number', 'Y'], + "AI_DESIRED_HEADING": ["Desired heading of the AI object.", b'AI DESIRED HEADING', b'Degrees', 'Y'], + "AI_GROUNDTURNTIME": ["Time to make a 90 degree turn.", b'AI GROUNDTURNTIME', b'Seconds', 'Y'], + "AI_GROUNDCRUISESPEED": ["Cruising speed.", b'AI GROUNDCRUISESPEED', b'Knots', 'Y'], + "AI_GROUNDTURNSPEED": ["Turning speed.", b'AI GROUNDTURNSPEED', b'Knots', 'Y'], + "AI_TRAFFIC_ISIFR": ["Request whether this aircraft is IFR or VFR See Note 1.", b'AI TRAFFIC ISIFR', b'Boolean', 'N'], + "AI_TRAFFIC_CURRENT_AIRPORT": ["ICAO code of current airport. See Note 1.", b'AI TRAFFIC CURRENT AIRPORT', b'String', 'N'], + "AI_TRAFFIC_ASSIGNED_RUNWAY": ["Assigned runway name (for example: \"32R\"). See Note 1.", b'AI TRAFFIC ASSIGNED RUNWAY', b'String', 'N'], + "AI_TRAFFIC_FROMAIRPORT": ["ICAO code of the departure airport in the current schedule. See Note 2.", b'AI TRAFFIC FROMAIRPORT', b'String', 'N'], + "AI_TRAFFIC_TOAIRPORT": ["ICAO code of the destination airport in the current schedule. 
See Note 2.", b'AI TRAFFIC TOAIRPORT', b'String', 'N'], + "AI_TRAFFIC_ETD": ["Estimated time of departure for the current schedule entry, given as the number of seconds difference from the current simulation time. This can be negative if ETD is earlier than the current simulation time. See Note 2.", b'AI TRAFFIC ETD', b'Seconds', 'N'], + "AI_TRAFFIC_ETA": ["Estimated time of arrival for the current schedule entry, given as the number of seconds difference from the current simulated time. This can be negative if ETA is earlier than the current simulated time. See Note 2.", b'AI TRAFFIC ETA', b'Seconds', 'N'], + } + + class __CarrierOperations(RequestHelper): + list = { + "LAUNCHBAR_POSITION": ["Installed on aircraft before takeoff from a carrier catapult. Note that gear cannot retract with this extended. 100 = fully extended. Refer to the document Notes on Aircraft Systems.", b'LAUNCHBAR POSITION', b'Percent over 100', 'N'], + "LAUNCHBAR_SWITCH": ["If this is set to True the launch bar switch has been engaged.", b'LAUNCHBAR SWITCH', b'Bool', 'N'], + "LAUNCHBAR_HELD_EXTENDED": ["This will be True if the launchbar is fully extended, and can be used, for example, to change the color of an instrument light.", b'LAUNCHBAR HELD EXTENDED', b'Bool', 'N'], + "NUMBER_OF_CATAPULTS": ["Maximum of 4. A model can contain more than 4 catapults, but only the first four will be read and recognized by the simulation.", b'NUMBER OF CATAPULTS', b'Number', 'N'], + "CATAPULT_STROKE_POSITION:index": ["Catapults are indexed from 1. This value will be 0 before the catapult fires, and then up to 100 as the aircraft is propelled down the catapult. The aircraft may takeoff before the value reaches 100 (depending on the aircraft weight, power applied, and other factors), in which case this value will not be further updated. This value could be used to drive a bogie animation.", b'CATAPULT STROKE POSITION:index', b'Number', 'N'], + "HOLDBACK_BAR_INSTALLED": ["Holdback bars allow build up of thrust before takeoff from a catapult, and are installed by the deck crew of an aircraft carrier.", b'HOLDBACK BAR INSTALLED', b'Bool', 'N'], + "BLAST_SHIELD_POSITION:index": ["Indexed from 1, 100 is fully deployed, 0 flat on deck", b'BLAST SHIELD POSITION:index', b'Percent over 100', 'N'], + "CABLE_CAUGHT_BY_TAILHOOK": ["A number 1 through 4 for the cable number caught by the tailhook. Cable 1 is the one closest to the stern of the carrier. A value of 0 indicates no cable was caught.", b'CABLE CAUGHT BY TAILHOOK', b'Number', 'N'], + "TAILHOOK_HANDLE": ["True if the tailhook handle is engaged.", b'TAILHOOK HANDLE', b'Bool', 'N'], + "SURFACE_RELATIVE_GROUND_SPEED": ["The speed of the aircraft relative to the speed of the first surface directly underneath it. Use this to retrieve, for example, an aircraft's taxiing speed while it is moving on a moving carrier. It also applies to airborne aircraft, for example when a helicopter is successfully hovering above a moving ship, this value should be zero. The returned value will be the same as GROUND VELOCITY if the first surface beneath it is not moving.", b'SURFACE RELATIVE GROUND SPEED', b'Feet per second', 'N'], + } + + class __Racing(RequestHelper): + list = { + "RECIP_ENG_DETONATING:index": ["Indexed from 1. 
Set to True if the engine is detonating.", b'RECIP ENG DETONATING:index', b'Bool', 'N'], + "RECIP_ENG_CYLINDER_HEALTH:index": ["Index high 16 bits is engine number, low 16 cylinder number, both indexed from 1.", b'RECIP ENG CYLINDER HEALTH:index', b'Percent over 100', 'N'], + "RECIP_ENG_NUM_CYLINDERS": ["Indexed from 1. The number of engine cylinders.", b'RECIP ENG NUM CYLINDERS', b'Number', 'N'], + "RECIP_ENG_NUM_CYLINDERS_FAILED": ["Indexed from 1. The number of cylinders that have failed.", b'RECIP ENG NUM CYLINDERS FAILED', b'Number', 'N'], + "RECIP_ENG_ANTIDETONATION_TANK_VALVE:index": ["Indexed from 1, each engine can have one antidetonation tank. Installed on racing aircraft. Refer to the document Notes on Aircraft Systems.", b'RECIP ENG ANTIDETONATION TANK VALVE:index', b'Bool', 'Y'], + "RECIP_ENG_ANTIDETONATION_TANK_QUANTITY:index": ["Indexed from 1. Refer to the Mission Creation documentation for the procedure for refilling tanks.", b'RECIP ENG ANTIDETONATION TANK QUANTITY:index', b'Gallons', 'Y'], + "RECIP_ENG_ANTIDETONATION_TANK_MAX_QUANTITY:index": ["Indexed from 1. This value is set in the Aircraft Configuration File.", b'RECIP ENG ANTIDETONATION TANK MAX QUANTITY:index', b'Gallons', 'N'], + "RECIP_ENG_NITROUS_TANK_VALVE:index": ["Indexed from 1. Each engine can have one Nitrous fuel tank installed.", b'RECIP ENG NITROUS TANK VALVE:index', b'Bool', 'Y'], + "RECIP_ENG_NITROUS_TANK_QUANTITY:index": ["Indexed from 1. Refer to the Mission Creation documentation for the procedure for refilling tanks.", b'RECIP ENG NITROUS TANK QUANTITY:index', b'Gallons', 'Y'], + "RECIP_ENG_NITROUS_TANK_MAX_QUANTITY:index": ["Indexed from 1. This value is set in the Aircraft Configuration File.", b'RECIP ENG NITROUS TANK MAX QUANTITY:index', b'Gallons', 'N'], + } + + class __EnvironmentData(RequestHelper): + list = { + "ABSOLUTE_TIME": ["Time, as referenced from 12:00 AM January 1, 0000", b'ABSOLUTE TIME', b'Seconds', 'N'], + "ZULU_TIME": ["Greenwich Mean Time (GMT)", b'ZULU TIME', b'Seconds', 'N'], + "ZULU_DAY_OF_WEEK": ["GMT day of week", b'ZULU DAY OF WEEK', b'Number', 'N'], + "ZULU_DAY_OF_MONTH": ["GMT day of month", b'ZULU DAY OF MONTH', b'Number', 'N'], + "ZULU_MONTH_OF_YEAR": ["GMT month of year", b'ZULU MONTH OF YEAR', b'Number', 'N'], + "ZULU_DAY_OF_YEAR": ["GMT day of year", b'ZULU DAY OF YEAR', b'Number', 'N'], + "ZULU_YEAR": ["GMT year", b'ZULU YEAR', b'Number', 'N'], + "LOCAL_TIME": ["Local time", b'LOCAL TIME', b'Seconds', 'N'], + "LOCAL_DAY_OF_WEEK": ["Local day of week", b'LOCAL DAY OF WEEK', b'Number', 'N'], + "LOCAL_DAY_OF_MONTH": ["Local day of month", b'LOCAL DAY OF MONTH', b'Number', 'N'], + "LOCAL_MONTH_OF_YEAR": ["Local month of year", b'LOCAL MONTH OF YEAR', b'Number', 'N'], + "LOCAL_DAY_OF_YEAR": ["Local day of year", b'LOCAL DAY OF YEAR', b'Number', 'N'], + "LOCAL_YEAR": ["Local year", b'LOCAL YEAR', b'Number', 'N'], + "TIME_ZONE_OFFSET": ["Local time difference from GMT", b'TIME ZONE OFFSET', b'Seconds', 'N'], + } diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect-backup.dll b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect-backup.dll new file mode 100644 index 00000000..da40b739 Binary files /dev/null and b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect-backup.dll differ diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.dll b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.dll new file mode 100644 index 00000000..3f2980cd Binary files /dev/null and 
b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.dll differ diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.py b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.py new file mode 100644 index 00000000..5bb44652 --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/SimConnect.py @@ -0,0 +1,482 @@ +from ctypes import * +from ctypes.wintypes import * +import logging +import time +from .Enum import * +from .Constants import * +from .Attributes import * +import os +import threading + +_library_path = os.path.splitext(os.path.abspath(__file__))[0] + '.dll' + +LOGGER = logging.getLogger(__name__) + + +def millis(): + return int(round(time.time() * 1000)) + + +class SimConnect: + + def IsHR(self, hr, value): + _hr = ctypes.HRESULT(hr) + return ctypes.c_ulong(_hr.value).value == value + + def handle_id_event(self, event): + uEventID = event.uEventID + if uEventID == self.dll.EventID.EVENT_SIM_START: + LOGGER.info("SIM START") + self.running = True + if uEventID == self.dll.EventID.EVENT_SIM_STOP: + LOGGER.info("SIM Stop") + self.running = False + # Unknown why the pause events below are not being received + if uEventID == self.dll.EventID.EVENT_SIM_PAUSED: + LOGGER.info("SIM Paused") + self.paused = True + if uEventID == self.dll.EventID.EVENT_SIM_UNPAUSED: + LOGGER.info("SIM Unpaused") + self.paused = False + + def handle_simobject_event(self, ObjData): + dwRequestID = ObjData.dwRequestID + if dwRequestID in self.Requests: + _request = self.Requests[dwRequestID] + rtype = _request.definitions[0][1].decode() + if 'string' in rtype.lower(): + pS = cast(ObjData.dwData, c_char_p) + _request.outData = pS.value + else: + _request.outData = cast( + ObjData.dwData, POINTER(c_double * len(_request.definitions)) + ).contents[0] + else: + LOGGER.warning("Event ID: %d Not Handled." % (dwRequestID)) + + def handle_exception_event(self, exc): + _exception = SIMCONNECT_EXCEPTION(exc.dwException).name + _unsendid = exc.UNKNOWN_SENDID + _sendid = exc.dwSendID + _unindex = exc.UNKNOWN_INDEX + _index = exc.dwIndex + + # request exceptions + for _reqin in self.Requests: + _request = self.Requests[_reqin] + if _request.LastID == _unsendid: + LOGGER.warning("%s: in %s" % (_exception, _request.definitions[0])) + return + + LOGGER.warning(_exception) + + def handle_state_event(self, pData): + print("I:", pData.dwInteger, "F:", pData.fFloat, "S:", pData.szString) + + # TODO: update callback function to expand functions. 
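+ # my_dispatch_proc is registered with the native library as the DispatchProc + # callback. It reads the dwID of each incoming message and routes it to the + # handlers above: system events, system state, SIMOBJECT_DATA_BYTYPE answers + # for pending Requests, exceptions, assigned object IDs and facilities lists. + # An OPEN message marks the connection as ready (self.ok) and QUIT ends the + # dispatch loop (self.quit).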
+ def my_dispatch_proc(self, pData, cbData, pContext): + # print("my_dispatch_proc") + dwID = pData.contents.dwID + if dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_EVENT: + evt = cast(pData, POINTER(SIMCONNECT_RECV_EVENT)).contents + self.handle_id_event(evt) + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_SYSTEM_STATE: + state = cast(pData, POINTER(SIMCONNECT_RECV_SYSTEM_STATE)).contents + self.handle_state_event(state) + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_SIMOBJECT_DATA_BYTYPE: + pObjData = cast( + pData, POINTER(SIMCONNECT_RECV_SIMOBJECT_DATA_BYTYPE) + ).contents + self.handle_simobject_event(pObjData) + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_OPEN: + LOGGER.info("SIM OPEN") + self.ok = True + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_EXCEPTION: + exc = cast(pData, POINTER(SIMCONNECT_RECV_EXCEPTION)).contents + self.handle_exception_event(exc) + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_ASSIGNED_OBJECT_ID: + pObjData = cast( + pData, POINTER(SIMCONNECT_RECV_ASSIGNED_OBJECT_ID) + ).contents + objectId = pObjData.dwObjectID + os.environ["SIMCONNECT_OBJECT_ID"] = str(objectId) + + elif (dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_AIRPORT_LIST) or ( + dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_WAYPOINT_LIST) or ( + dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_NDB_LIST) or ( + dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_VOR_LIST): + pObjData = cast( + pData, POINTER(SIMCONNECT_RECV_FACILITIES_LIST) + ).contents + dwRequestID = pObjData.dwRequestID + for _facilitie in self.Facilities: + if dwRequestID == _facilitie.REQUEST_ID.value: + _facilitie.parent.dump(pData) + _facilitie.dump(pData) + + elif dwID == SIMCONNECT_RECV_ID.SIMCONNECT_RECV_ID_QUIT: + self.quit = 1 + else: + LOGGER.debug("Received:", SIMCONNECT_RECV_ID(dwID)) + return + + def __init__(self, auto_connect=True, library_path=_library_path): + + self.Requests = {} + self.Facilities = [] + self.dll = SimConnectDll(library_path) + self.hSimConnect = HANDLE() + self.quit = 0 + self.ok = False + self.running = False + self.paused = False + self.DEFINITION_POS = None + self.DEFINITION_WAYPOINT = None + self.my_dispatch_proc_rd = self.dll.DispatchProc(self.my_dispatch_proc) + if auto_connect: + self.connect() + + def connect(self): + try: + err = self.dll.Open( + byref(self.hSimConnect), LPCSTR(b"Request Data"), None, 0, 0, 0 + ) + if self.IsHR(err, 0): + LOGGER.debug("Connected to Flight Simulator!") + # Request an event when the simulation starts + + # The user is in control of the aircraft + self.dll.SubscribeToSystemEvent( + self.hSimConnect, self.dll.EventID.EVENT_SIM_START, b"SimStart" + ) + # The user is navigating the UI. + self.dll.SubscribeToSystemEvent( + self.hSimConnect, self.dll.EventID.EVENT_SIM_STOP, b"SimStop" + ) + # Request a notification when the flight is paused + self.dll.SubscribeToSystemEvent( + self.hSimConnect, self.dll.EventID.EVENT_SIM_PAUSED, b"Paused" + ) + # Request a notification when the flight is un-paused. 
+ self.dll.SubscribeToSystemEvent( + self.hSimConnect, self.dll.EventID.EVENT_SIM_UNPAUSED, b"Unpaused" + ) + self.timerThread = threading.Thread(target=self._run) + self.timerThread.daemon = True + self.timerThread.start() + while self.ok is False: + pass + except OSError: + LOGGER.debug("Did not find Flight Simulator running.") + raise ConnectionError("Did not find Flight Simulator running.") + + def _run(self): + while self.quit == 0: + try: + self.dll.CallDispatch(self.hSimConnect, self.my_dispatch_proc_rd, None) + time.sleep(.002) + except OSError as err: + print("OS error: {0}".format(err)) + + def exit(self): + self.quit = 1 + self.timerThread.join() + self.dll.Close(self.hSimConnect) + + def map_to_sim_event(self, name): + for m in self.dll.EventID: + if name.decode() == m.name: + LOGGER.debug("Already have event: ", m) + return m + + names = [m.name for m in self.dll.EventID] + [name.decode()] + self.dll.EventID = Enum(self.dll.EventID.__name__, names) + evnt = list(self.dll.EventID)[-1] + err = self.dll.MapClientEventToSimEvent(self.hSimConnect, evnt.value, name) + if self.IsHR(err, 0): + return evnt + else: + LOGGER.error("Error: MapToSimEvent") + return None + + def add_to_notification_group(self, group, evnt, bMaskable=False): + self.dll.AddClientEventToNotificationGroup( + self.hSimConnect, group, evnt, bMaskable + ) + + def request_data(self, _Request): + _Request.outData = None + self.dll.RequestDataOnSimObjectType( + self.hSimConnect, + _Request.DATA_REQUEST_ID.value, + _Request.DATA_DEFINITION_ID.value, + 0, + SIMCONNECT_SIMOBJECT_TYPE.SIMCONNECT_SIMOBJECT_TYPE_USER, + ) + temp = DWORD(0) + self.dll.GetLastSentPacketID(self.hSimConnect, temp) + _Request.LastID = temp.value + + def set_data(self, _Request): + rtype = _Request.definitions[0][1].decode() + if 'string' in rtype.lower(): + pyarr = bytearray(_Request.outData) + dataarray = (ctypes.c_char * len(pyarr))(*pyarr) + else: + pyarr = list([_Request.outData]) + dataarray = (ctypes.c_double * len(pyarr))(*pyarr) + + pObjData = cast( + dataarray, c_void_p + ) + err = self.dll.SetDataOnSimObject( + self.hSimConnect, + _Request.DATA_DEFINITION_ID.value, + SIMCONNECT_SIMOBJECT_TYPE.SIMCONNECT_SIMOBJECT_TYPE_USER, + 0, + 0, + sizeof(ctypes.c_double) * len(pyarr), + pObjData + ) + if self.IsHR(err, 0): + # LOGGER.debug("Request Sent") + return True + else: + return False + + def get_data(self, _Request): + self.request_data(_Request) + # self.run() + attemps = 0 + while _Request.outData is None and attemps < _Request.attemps: + # self.run() + time.sleep(.01) + attemps += 1 + if _Request.outData is None: + return False + + return True + + def send_event(self, evnt, data=DWORD(0)): + err = self.dll.TransmitClientEvent( + self.hSimConnect, + SIMCONNECT_OBJECT_ID_USER, + evnt.value, + data, + SIMCONNECT_GROUP_PRIORITY_HIGHEST, + DWORD(16), + ) + + if self.IsHR(err, 0): + # LOGGER.debug("Event Sent") + return True + else: + return False + + def new_def_id(self): + _name = "Definition" + str(len(list(self.dll.DATA_DEFINITION_ID))) + names = [m.name for m in self.dll.DATA_DEFINITION_ID] + [_name] + + self.dll.DATA_DEFINITION_ID = Enum(self.dll.DATA_DEFINITION_ID.__name__, names) + DEFINITION_ID = list(self.dll.DATA_DEFINITION_ID)[-1] + return DEFINITION_ID + + def new_request_id(self): + name = "Request" + str(len(self.dll.DATA_REQUEST_ID)) + names = [m.name for m in self.dll.DATA_REQUEST_ID] + [name] + self.dll.DATA_REQUEST_ID = Enum(self.dll.DATA_REQUEST_ID.__name__, names) + REQUEST_ID = list(self.dll.DATA_REQUEST_ID)[-1] + + 
return REQUEST_ID + + def add_waypoints(self, _waypointlist): + if self.DEFINITION_WAYPOINT is None: + self.DEFINITION_WAYPOINT = self.new_def_id() + err = self.dll.AddToDataDefinition( + self.hSimConnect, + self.DEFINITION_WAYPOINT.value, + b'AI WAYPOINT LIST', + b'number', + SIMCONNECT_DATATYPE.SIMCONNECT_DATATYPE_WAYPOINT, + 0, + SIMCONNECT_UNUSED, + ) + pyarr = [] + for waypt in _waypointlist: + for e in waypt._fields_: + pyarr.append(getattr(waypt, e[0])) + dataarray = (ctypes.c_double * len(pyarr))(*pyarr) + pObjData = cast( + dataarray, c_void_p + ) + sx = int(sizeof(ctypes.c_double) * (len(pyarr) / len(_waypointlist))) + hr = self.dll.SetDataOnSimObject( + self.hSimConnect, + self.DEFINITION_WAYPOINT.value, + SIMCONNECT_OBJECT_ID_USER, + 0, + len(_waypointlist), + sx, + pObjData + ) + if self.IsHR(hr, 0): + return True + else: + return False + + def set_pos( + self, + _Altitude, + _Latitude, + _Longitude, + _Airspeed, + _Pitch=0.0, + _Bank=0.0, + _Heading=0, + _OnGround=0, + ): + Init = SIMCONNECT_DATA_INITPOSITION() + Init.Altitude = _Altitude + Init.Latitude = _Latitude + Init.Longitude = _Longitude + Init.Pitch = _Pitch + Init.Bank = _Bank + Init.Heading = _Heading + Init.OnGround = _OnGround + Init.Airspeed = _Airspeed + + if self.DEFINITION_POS is None: + self.DEFINITION_POS = self.new_def_id() + err = self.dll.AddToDataDefinition( + self.hSimConnect, + self.DEFINITION_POS.value, + b'Initial Position', + b'', + SIMCONNECT_DATATYPE.SIMCONNECT_DATATYPE_INITPOSITION, + 0, + SIMCONNECT_UNUSED, + ) + + hr = self.dll.SetDataOnSimObject( + self.hSimConnect, + self.DEFINITION_POS.value, + SIMCONNECT_OBJECT_ID_USER, + 0, + 0, + sizeof(Init), + pointer(Init) + ) + if self.IsHR(hr, 0): + return True + else: + return False + + def load_flight(self, flt_path): + hr = self.dll.FlightLoad(self.hSimConnect, flt_path.encode()) + if self.IsHR(hr, 0): + return True + else: + return False + + def load_flight_plan(self, pln_path): + hr = self.dll.FlightPlanLoad(self.hSimConnect, pln_path.encode()) + if self.IsHR(hr, 0): + return True + else: + return False + + def save_flight( + self, + flt_path, + flt_title, + flt_description, + flt_mission_type='FreeFlight', + flt_mission_location='Custom departure', + flt_original_flight='', + flt_flight_type='NORMAL'): + hr = self.dll.FlightSave(self.hSimConnect, flt_path.encode(), flt_title.encode(), flt_description.encode(), 0) + if not self.IsHR(hr, 0): + return False + + dicp = self.flight_to_dic(flt_path) + if 'MissionType' not in dicp['Main']: + dicp['Main']['MissionType'] = flt_mission_type + + if 'MissionLocation' not in dicp['Main']: + dicp['Main']['MissionLocation'] = flt_mission_location + + if 'FlightType' not in dicp['Main']: + dicp['Main']['FlightType'] = flt_flight_type + + if 'OriginalFlight' not in dicp['Main']: + dicp['Main']['OriginalFlight'] = flt_original_flight + self.dic_to_flight(dicp, flt_path) + + return True + + def get_paused(self): + hr = self.dll.RequestSystemState( + self.hSimConnect, + self.dll.EventID.EVENT_SIM_PAUSED, + b"Sim" + ) + + def dic_to_flight(self, dic, fpath): + with open(fpath, "w") as tempfile: + for root in dic: + tempfile.write("\n[%s]\n" % root) + for member in dic[root]: + tempfile.write("%s=%s\n" % (member, dic[root][member])) + + def flight_to_dic(self, fpath): + while not os.path.isfile(fpath): + pass + time.sleep(0.5) + dic = {} + index = "" + with open(fpath, "r") as tempfile: + for line in tempfile.readlines(): + if line[0] == '[': + index = line[1:-2] + dic[index] = {} + else: + if index != 
"" and line != '\n': + temp = line.split("=") + dic[index][temp[0]] = temp[1].strip() + return dic + + def sendText(self, text, timeSeconds=5, TEXT_TYPE=SIMCONNECT_TEXT_TYPE.SIMCONNECT_TEXT_TYPE_PRINT_WHITE): + pyarr = bytearray(text.encode()) + dataarray = (ctypes.c_char * len(pyarr))(*pyarr) + pObjData = cast(dataarray, c_void_p) + self.dll.Text( + self.hSimConnect, + TEXT_TYPE, + timeSeconds, + 0, + sizeof(ctypes.c_double) * len(pyarr), + pObjData + ) + + def createSimulatedObject(self, name, lat, lon, rqst, hdg=0, gnd=1, alt=0, pitch=0, bank=0, speed=0): + simInitPos = SIMCONNECT_DATA_INITPOSITION() + simInitPos.Altitude = alt + simInitPos.Latitude = lat + simInitPos.Longitude = lon + simInitPos.Pitch = pitch + simInitPos.Bank = bank + simInitPos.Heading = hdg + simInitPos.OnGround = gnd + simInitPos.Airspeed = speed + self.dll.AICreateSimulatedObject( + self.hSimConnect, + name.encode(), + simInitPos, + rqst.value + ) diff --git a/templates/skills/msfs2020_control/dependencies/SimConnect/__init__.py b/templates/skills/msfs2020_control/dependencies/SimConnect/__init__.py new file mode 100644 index 00000000..ea8fd2ac --- /dev/null +++ b/templates/skills/msfs2020_control/dependencies/SimConnect/__init__.py @@ -0,0 +1,17 @@ +from .SimConnect import SimConnect, millis, DWORD +from .RequestList import AircraftRequests, Request +from .EventList import AircraftEvents, Event +from .FacilitiesList import FacilitiesRequests, Facilitie + + +def int_or_str(value): + try: + return int(value) + except TypeError: + return value + + +__version__ = "0.4.26" +VERSION = tuple(map(int_or_str, __version__.split("."))) + +__all__ = ["SimConnect", "Request", "Event", "millis", "DWORD", "AircraftRequests", "AircraftEvents", "FacilitiesRequests"] diff --git a/templates/skills/msfs2020_control/logo.png b/templates/skills/msfs2020_control/logo.png new file mode 100644 index 00000000..53699cde Binary files /dev/null and b/templates/skills/msfs2020_control/logo.png differ diff --git a/templates/skills/msfs2020_control/main.py b/templates/skills/msfs2020_control/main.py new file mode 100644 index 00000000..1f1ece41 --- /dev/null +++ b/templates/skills/msfs2020_control/main.py @@ -0,0 +1,427 @@ +import time +import random +import requests +from typing import TYPE_CHECKING +from SimConnect import * +from api.interface import ( + SettingsConfig, + SkillConfig, + WingmanInitializationError, +) +from api.enums import LogType +from skills.skill_base import Skill + +if TYPE_CHECKING: + from wingmen.open_ai_wingman import OpenAiWingman + + +class Msfs2020Control(Skill): + + def __init__(self, config: SkillConfig, settings: SettingsConfig, wingman: "OpenAiWingman") -> None: + super().__init__(config=config, settings=settings, wingman=wingman) + self.already_initialized_simconnect = False + self.loaded = False + self.sm = None # Needs to be set once MSFS2020 is actually connected + self.aq = None # Same + self.ae = None # Same + self.data_monitoring_loop_running = False + self.autostart_data_monitoring_loop_mode = False + self.data_monitoring_backstory = "" + self.min_data_monitoring_seconds = 60 + self.max_data_monitoring_seconds = 360 + + async def validate(self) -> list[WingmanInitializationError]: + errors = await super().validate() + self.autostart_data_monitoring_loop_mode = self.retrieve_custom_property_value( + "autostart_data_monitoring_loop_mode", errors + ) + self.data_monitoring_backstory = self.retrieve_custom_property_value( + "data_monitoring_backstory", errors + ) + # If not available or not set, use default 
wingman's backstory + if not self.data_monitoring_backstory or self.data_monitoring_backstory == "" or self.data_monitoring_backstory == " ": + self.data_monitoring_backstory = self.wingman.config.prompts.backstory + + self.min_data_monitoring_seconds = self.retrieve_custom_property_value( + "min_data_monitoring_seconds", errors + ) + + self.max_data_monitoring_seconds = self.retrieve_custom_property_value( + "max_data_monitoring_seconds", errors + ) + + return errors + + def get_tools(self) -> list[tuple[str, dict]]: + return [ + ( + "get_data_from_sim", + { + "type": "function", + "function": { + "name": "get_data_from_sim", + "description": "Retrieve data points from Microsoft Flight Simulator 2020 using the Python SimConnect module.", + "parameters": { + "type": "object", + "properties": { + "data_point": { + "type": "string", + "description": "The data point to retrieve, such as 'PLANE_ALTITUDE', 'PLANE_HEADING_DEGREES_TRUE'.", + }, + }, + "required": ["data_point"], + }, + }, + }, + ), + ( + "set_data_or_perform_action_in_sim", + { + "type": "function", + "function": { + "name": "set_data_or_perform_action_in_sim", + "description": "Set data points or perform actions in Microsoft Flight Simulator 2020 using the Python SimConnect module.", + "parameters": { + "type": "object", + "properties": { + "action": { + "type": "string", + "description": "The action to perform or data point to set, such as 'TOGGLE_MASTER_BATTERY', 'THROTTLE_SET'.", + }, + "argument": { + "type": "number", + "description": "The argument to pass for the action, if any. For actions like 'TOGGLE_MASTER_BATTERY', no argument is needed. For 'THROTTLE_SET', pass the throttle value.", + }, + }, + "required": ["action"], + }, + }, + }, + ), + ( + "start_or_activate_data_monitoring_loop", + { + "type": "function", + "function": { + "name": "start_or_activate_data_monitoring_loop", + "description": "Begin data monitoring loop, which will check certain data points at designated intervals. May be referred to as tour guide mode.", + }, + }, + ), + ( + "end_or_stop_data_monitoring_loop", + { + "type": "function", + "function": { + "name": "end_or_stop_data_monitoring_loop", + "description": "End or stop data monitoring loop, to stop automatically checking data points at designated intervals. May be referred to as tour guide mode.", + }, + }, + ), + ( + "get_information_about_current_location", + { + "type": "function", + "function": { + "name": "get_information_about_current_location", + "description": "Used to provide more detailed information if the user asks a general question like 'where are we?', 'what city are we flying over?', or 'what country is down there?'", + }, + }, + ), + ] + + # Using sample methods found here; allow AI to determine the appropriate variables and arguments, if any: + # https://pypi.org/project/SimConnect/ + async def execute_tool(self, tool_name: str, parameters: dict[str, any]) -> tuple[str, str]: + function_response = "Error in execution. Can you please try your command again?" 
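+        # The LLM passes SimVar and event names as plain strings (e.g.
+        # "PLANE_ALTITUDE" or "TOGGLE_MASTER_BATTERY"); the available names are
+        # listed in the Python SimConnect docs linked above. Any failed lookup
+        # falls back to this generic error response.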
+
+        instant_response = ""
+
+        if self.settings.debug_mode:
+            self.start_execution_benchmark()
+            await self.printr.print_async(
+                f"Executing {tool_name} function with parameters: {parameters}",
+                color=LogType.INFO,
+            )
+
+        if tool_name == "get_data_from_sim":
+            data_point = parameters.get("data_point")
+            value = self.aq.get(data_point)
+            function_response = f"{data_point} value is: {value}"
+
+        elif tool_name == "set_data_or_perform_action_in_sim":
+            action = parameters.get("action")
+            argument = parameters.get("argument", None)
+
+            try:
+                # First attempt: set the value as a SimVar, or trigger it as a
+                # parameterless event if no argument was given.
+                if argument is not None:
+                    self.aq.set(action, argument)
+                else:
+                    event_to_trigger = self.ae.find(action)
+                    event_to_trigger()
+            except Exception:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Tried to perform action {action} with argument {argument} using aq.set, now going to try ae.event_to_trigger.",
+                        color=LogType.INFO,
+                    )
+
+                # Second attempt: treat the action as an event, passing the
+                # argument along if one was given.
+                try:
+                    event_to_trigger = self.ae.find(action)
+                    if argument is not None:
+                        event_to_trigger(argument)
+                    else:
+                        event_to_trigger()
+                except Exception:
+                    if self.settings.debug_mode:
+                        await self.print_execution_time()
+                        await self.printr.print_async(
+                            f"Neither aq.set nor ae.event_to_trigger worked with {action} and {argument}. Command failed.",
+                            color=LogType.INFO,
+                        )
+                    return function_response, instant_response
+
+            function_response = f"Action '{action}' executed with argument '{argument}'"
+
+        elif tool_name == "start_or_activate_data_monitoring_loop":
+            if self.data_monitoring_loop_running:
+                function_response = "Data monitoring loop is already running."
+                return function_response, instant_response
+
+            self.start_execution_benchmark()
+            await self.printr.print_async(
+                "Executing start_or_activate_data_monitoring_loop",
+                color=LogType.INFO,
+            )
+
+            if not self.already_initialized_simconnect:
+                function_response = "Cannot start data monitoring / tour guide mode because SimConnect is not connected yet. Check to make sure the game is running."
+                return function_response, instant_response
+
+            await self.initialize_data_monitoring_loop()
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            function_response = "Started data monitoring loop / tour guide mode."
+
+        elif tool_name == "end_or_stop_data_monitoring_loop":
+            self.start_execution_benchmark()
+            await self.printr.print_async(
+                "Executing end_or_stop_data_monitoring_loop",
+                color=LogType.INFO,
+            )
+
+            await self.stop_data_monitoring_loop()
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            function_response = "Closed data monitoring / tour guide mode."
+
+        elif tool_name == "get_information_about_current_location":
+            place_info = await self.convert_lat_long_data_into_place_data()
+
+            if self.settings.debug_mode:
+                await self.print_execution_time()
+
+            if place_info:
+                on_ground = self.aq.get("SIM_ON_GROUND")
+                # SIM_ON_GROUND is truthy while the aircraft is on the ground
+                on_ground_statement = "The plane is currently in the air."
+                if on_ground:
+                    on_ground_statement = "The plane is currently on the ground."
+                function_response = f"{on_ground_statement} Detailed information regarding the location we are currently at or flying over: {place_info}"
+            else:
+                function_response = "Unable to get more detailed information regarding the place based on the current latitude and longitude."
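+                # Typically happens right after connecting, before SimConnect
+                # has delivered position data, or when the Nominatim request
+                # fails (see convert_lat_long_data_into_place_data below).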
+
+        if self.settings.debug_mode:
+            await self.print_execution_time()
+            await self.printr.print_async(
+                function_response,
+                color=LogType.INFO,
+            )
+
+        return function_response, instant_response
+
+    # Search for a running MSFS2020 sim and then connect
+    async def start_simconnect(self):
+        while self.loaded and not self.already_initialized_simconnect:
+            try:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        "Attempting to find MSFS2020...",
+                        color=LogType.INFO,
+                    )
+                self.sm = SimConnect()
+                self.aq = AircraftRequests(self.sm, _time=2000)
+                self.ae = AircraftEvents(self.sm)
+                self.already_initialized_simconnect = True
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        "Initialized SimConnect with MSFS2020.",
+                        color=LogType.INFO,
+                    )
+                if self.autostart_data_monitoring_loop_mode:
+                    await self.initialize_data_monitoring_loop()
+            except Exception:
+                # Wait 30 seconds between connect attempts
+                time.sleep(30)
+
+    async def initialize_data_monitoring_loop(self):
+        if self.data_monitoring_loop_running:
+            return
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                "Starting data monitoring loop",
+                color=LogType.INFO,
+            )
+
+        self.threaded_execution(self.start_data_monitoring_loop)
+
+    async def start_data_monitoring_loop(self):
+        if not self.data_monitoring_loop_running:
+            self.data_monitoring_loop_running = True
+
+        while self.data_monitoring_loop_running:
+            # Pick a random interval between min and max, in 15-second steps
+            random_time = random.randrange(
+                self.min_data_monitoring_seconds, self.max_data_monitoring_seconds, 15
+            )
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    "Attempting looped monitoring check.",
+                    color=LogType.INFO,
+                )
+            try:
+                place_data = await self.convert_lat_long_data_into_place_data()
+                if place_data:
+                    await self.initiate_llm_call_with_plane_data(place_data)
+            except Exception as e:
+                if self.settings.debug_mode:
+                    await self.printr.print_async(
+                        f"Something failed in looped monitoring check. Could not return data or send to LLM: {e}.",
+                        color=LogType.INFO,
+                    )
+            time.sleep(random_time)
+
+    async def stop_data_monitoring_loop(self):
+        self.data_monitoring_loop_running = False
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                "Stopping data monitoring loop",
+                color=LogType.INFO,
+            )
+
+    async def convert_lat_long_data_into_place_data(self, latitude=None, longitude=None, altitude=None):
+        if not self.already_initialized_simconnect or not self.sm or not self.aq:
+            return None
+        # If all parameters are already provided, just run the request
+        if latitude is not None and longitude is not None and altitude is not None:
+            ground_altitude = self.aq.get("GROUND_ALTITUDE")
+        # If only latitude and longitude, grab altitude so a reasonable "zoom level" can be set for place data
+        elif latitude is not None and longitude is not None:
+            altitude = self.aq.get("PLANE_ALTITUDE")
+            ground_altitude = self.aq.get("GROUND_ALTITUDE")
+        # Otherwise grab all data components
+        else:
+            latitude = self.aq.get("PLANE_LATITUDE")
+            longitude = self.aq.get("PLANE_LONGITUDE")
+            altitude = self.aq.get("PLANE_ALTITUDE")
+            ground_altitude = self.aq.get("GROUND_ALTITUDE")
+
+        # If values are still missing, for instance when the connection is made but no data has arrived yet, give up.
+        # Compare against None explicitly so legitimate zero values (equator, prime meridian, sea level) pass through.
+        if latitude is None or longitude is None or altitude is None or ground_altitude is None:
+            return None
+
+        # Set zoom level based on altitude, see zoom documentation at https://nominatim.org/release-docs/develop/api/Reverse/
+        zoom = 18
+        distance_above_ground = altitude - ground_altitude
+        if distance_above_ground <= 1500:
+            zoom = 18
+        elif distance_above_ground <= 3500:
+            zoom = 17
+        elif distance_above_ground <= 5000:
+            zoom = 15
+        elif distance_above_ground <= 10000:
+            zoom = 13
+        elif distance_above_ground <= 20000:
+            zoom = 10
+        else:
+            zoom = 8
+
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                f"Attempting query of OpenStreetMap Nominatim with parameters: {latitude}, {longitude}, {altitude}, zoom level: {zoom}",
+                color=LogType.INFO,
+            )
+
+        # Request reverse geocoding data from the OpenStreetMap Nominatim API
+        url = f"https://nominatim.openstreetmap.org/reverse?format=jsonv2&lat={latitude}&lon={longitude}&zoom={zoom}&accept-language=en&extratags=1"
+        headers = {
+            'User-Agent': f'msfs2020control_skill wingmanai {self.wingman.name}'
+        }
+        response = requests.get(url, headers=headers)
+        if response.status_code == 200:
+            return response.json()
+        else:
+            if self.settings.debug_mode:
+                await self.printr.print_async(f"API request failed to {url}, status code: {response.status_code}.", color=LogType.INFO)
+            return None
+
+    # Get the LLM to provide a verbal response to the user, without requiring the user to initiate a communication with the LLM
+    async def initiate_llm_call_with_plane_data(self, data):
+        on_ground = self.aq.get("SIM_ON_GROUND")
+        on_ground_statement = "The plane is currently in the air."
+        if on_ground:
+            on_ground_statement = "The plane is currently on the ground."
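+        # Build a one-shot prompt: the data monitoring backstory serves as the
+        # system message and the location data as the user message, so the
+        # wingman speaks without the user having initiated a conversation.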
+
+        user_content = f"{on_ground_statement} Information about the location: {data}"
+        messages = [
+            {
+                'role': 'system',
+                'content': self.data_monitoring_backstory,
+            },
+            {
+                'role': 'user',
+                'content': user_content,
+            },
+        ]
+        if self.settings.debug_mode:
+            await self.printr.print_async(
+                f"Attempting LLM call with parameters: {self.data_monitoring_backstory}, {user_content}.",
+                color=LogType.INFO,
+            )
+        completion = await self.llm_call(messages)
+        response = completion.choices[0].message.content if completion and completion.choices else ""
+
+        if not response:
+            if self.settings.debug_mode:
+                await self.printr.print_async(
+                    "LLM call returned no response.",
+                    color=LogType.INFO,
+                )
+            return
+
+        await self.printr.print_async(
+            text=f"Data monitoring response: {response}",
+            color=LogType.INFO,
+            source_name=self.wingman.name
+        )
+
+        self.threaded_execution(self.wingman.play_to_user, response, True)
+        await self.wingman.add_assistant_message(response)
+
+    async def is_waiting_response_needed(self, tool_name: str) -> bool:
+        return True
+
+    async def prepare(self) -> None:
+        """Load the skill by trying to connect to the sim."""
+        self.loaded = True
+        self.threaded_execution(self.start_simconnect)
+
+    async def unload(self) -> None:
+        """Unload the skill."""
+        await self.stop_data_monitoring_loop()
+        self.loaded = False
+        if self.sm:
+            self.sm.exit()
diff --git a/templates/skills/msfs2020_control/requirements.txt b/templates/skills/msfs2020_control/requirements.txt
new file mode 100644
index 00000000..b3b490eb
--- /dev/null
+++ b/templates/skills/msfs2020_control/requirements.txt
@@ -0,0 +1 @@
+SimConnect~=0.4.26
\ No newline at end of file
diff --git a/templates/skills/spotify/main.py b/templates/skills/spotify/main.py
index 583ceacd..17639270 100644
--- a/templates/skills/spotify/main.py
+++ b/templates/skills/spotify/main.py
@@ -24,16 +24,23 @@ def __init__(
         self.data_path = get_writable_dir(path.join("skills", "spotify", "data"))
         self.spotify: spotipy.Spotify = None
         self.available_devices = []
+        self.secret: str = None
+
+    async def secret_changed(self, secrets: dict[str, any]):
+        await super().secret_changed(secrets)
+
+        if secrets["spotify_client_secret"] != self.secret:
+            await self.validate()
 
     async def validate(self) -> list[WingmanInitializationError]:
         errors = await super().validate()
 
-        secret = await self.retrieve_secret("spotify_client_secret", errors)
+        self.secret = await self.retrieve_secret("spotify_client_secret", errors)
         client_id = self.retrieve_custom_property_value("spotify_client_id", errors)
         redirect_url = self.retrieve_custom_property_value(
             "spotify_redirect_url", errors
         )
-        if secret and client_id and redirect_url:
+        if self.secret and client_id and redirect_url:
             # now that we have everything, initialize the Spotify client
             cache_handler = spotipy.cache_handler.CacheFileHandler(
                 cache_path=f"{self.data_path}/.cache"
@@ -41,7 +48,7 @@ async def validate(self) -> list[WingmanInitializationError]:
             self.spotify = spotipy.Spotify(
                 auth_manager=SpotifyOAuth(
                     client_id=client_id,
-                    client_secret=secret,
+                    client_secret=self.secret,
                     redirect_uri=redirect_url,
                     scope=[
                         "user-library-read",
diff --git a/templates/skills/thinking_sound/default_config.yaml b/templates/skills/thinking_sound/default_config.yaml
new file mode 100644
index 00000000..a6a9781a
--- /dev/null
+++ b/templates/skills/thinking_sound/default_config.yaml
@@ -0,0 +1,23 @@
+name: ThinkingSound
+module: skills.thinking_sound.main
+category: general
+description:
+  en: Plays back sounds while waiting for the AI response.
+  de: Spielt Sounds ab, während auf die Antwort der AI gewartet wird.
+custom_properties:
+  - id: audio_config
+    name: Audio Configuration
+    hint: Choose your files (random selection on multiple) and volume for playback. Recommended volume is 0.2 - 0.4.
+    required: true
+    property_type: audio_files
+    options:
+      - label: wait
+        value: false
+      - label: multiple
+        value: true
+      - label: volume
+        value: true
+    value:
+      files: []
+      volume: 0.4
+      wait: false
diff --git a/templates/skills/thinking_sound/logo.png b/templates/skills/thinking_sound/logo.png
new file mode 100644
index 00000000..80f17312
Binary files /dev/null and b/templates/skills/thinking_sound/logo.png differ
diff --git a/templates/skills/thinking_sound/main.py b/templates/skills/thinking_sound/main.py
new file mode 100644
index 00000000..9317a7bf
--- /dev/null
+++ b/templates/skills/thinking_sound/main.py
@@ -0,0 +1,85 @@
+import asyncio
+from api.enums import LogType
+from api.interface import WingmanInitializationError, AudioFileConfig
+from skills.skill_base import Skill
+
+class ThinkingSound(Skill):
+    def __init__(self, *args, **kwargs) -> None:
+        super().__init__(*args, **kwargs)
+
+        self.audio_config: AudioFileConfig = None
+        self.original_volume = None
+        self.stop_duration = 1
+        self.active = False
+        self.playing = False
+
+    async def validate(self) -> list[WingmanInitializationError]:
+        errors = await super().validate()
+        self.audio_config = self.retrieve_custom_property_value("audio_config", errors)
+        if self.audio_config:
+            # force "no wait" for this skill to work
+            self.audio_config.wait = False
+        return errors
+
+    async def unload(self) -> None:
+        await self.stop_playback()
+        self.active = False
+
+    async def prepare(self) -> None:
+        self.active = True
+
+    async def on_playback_started(self, wingman_name):
+        # placeholder for future implementation
+        pass
+
+    async def on_playback_finished(self, wingman_name):
+        # placeholder for future implementation
+        pass
+
+    async def on_add_user_message(self, message: str) -> None:
+        await self.wingman.audio_library.stop_playback(self.audio_config, 0)
+
+        if self.wingman.settings.debug_mode:
+            await self.printr.print_async(
+                "Initiating filler sound.",
+                color=LogType.INFO,
+                server_only=False,
+            )
+
+        self.threaded_execution(self.start_playback)
+        self.threaded_execution(self.auto_stop_playback)
+
+    async def start_playback(self):
+        if not self.audio_config:
+            await self.printr.print_async(
+                f"No filler sound configured for {self.wingman.name}'s thinking_sound skill.",
+                color=LogType.WARNING,
+                server_only=False,
+            )
+            return
+
+        if not self.playing:
+            self.playing = True
+            await self.wingman.audio_library.start_playback(
+                self.audio_config, self.wingman.config.sound.volume
+            )
+
+    async def stop_playback(self):
+        await self.wingman.audio_library.stop_playback(
+            self.audio_config, self.stop_duration
+        )
+
+    async def auto_stop_playback(self):
+        # Wait for the main playback to start
+        while not self.wingman.audio_player.is_playing and self.active:
+            await asyncio.sleep(0.1)
+
+        if self.wingman.settings.debug_mode:
+            await self.printr.print_async(
+                "Stopping filler sound softly.",
+                color=LogType.INFO,
+                server_only=False,
+            )
+
+        await self.wingman.audio_library.stop_playback(self.audio_config, self.stop_duration)
+        self.playing = False
diff --git a/templates/skills/vision_ai/main.py b/templates/skills/vision_ai/main.py
index b9f19bc7..fac889a3 100644
--- a/templates/skills/vision_ai/main.py
+++ b/templates/skills/vision_ai/main.py
@@ -91,7 +91,7 
@@ async def execute_tool( source=LogSource.WINGMAN, source_name=self.wingman.name, skill_name=self.name, - additional_data={"image": png_base64}, + additional_data={"image_base64": png_base64}, ) question = parameters.get("question", "What's in this image?") diff --git a/wingman_core.py b/wingman_core.py index 4f0e3741..d8b495b8 100644 --- a/wingman_core.py +++ b/wingman_core.py @@ -1,4 +1,5 @@ import asyncio +from concurrent.futures import ThreadPoolExecutor import os import re import threading @@ -19,6 +20,7 @@ ) from api.interface import ( AudioDevice, + AudioFile, AzureSttConfig, ConfigWithDirInfo, ElevenlabsModel, @@ -37,6 +39,7 @@ from services.settings_service import SettingsService from services.config_service import ConfigService from services.audio_player import AudioPlayer +from services.audio_library import AudioLibrary from services.audio_recorder import RECORDING_PATH, AudioRecorder from services.config_manager import ConfigManager from services.printr import Printr @@ -173,6 +176,12 @@ def __init__( endpoint=self.open_logs_directory, tags=tags, ) + self.router.add_api_route( + methods=["POST"], + path="/open-filemanager/audio-library", + endpoint=self.open_audio_library_directory, + tags=tags, + ) self.router.add_api_route( methods=["GET"], path="/models/openrouter", @@ -187,6 +196,20 @@ def __init__( endpoint=self.get_groq_models, tags=tags, ) + self.router.add_api_route( + methods=["GET"], + path="/models/cerebras", + response_model=list, + endpoint=self.get_cerebras_models, + tags=tags, + ) + self.router.add_api_route( + methods=["GET"], + path="/models/openai", + response_model=list, + endpoint=self.get_openai_models, + tags=tags, + ) self.router.add_api_route( methods=["GET"], path="/models/elevenlabs", @@ -194,6 +217,39 @@ def __init__( endpoint=self.get_elevenlabs_models, tags=tags, ) + # TODO: Refactor - move these to a new AudioLibrary service: + self.router.add_api_route( + methods=["GET"], + path="/audio-library", + response_model=list[AudioFile], + endpoint=self.get_audio_library, + tags=tags, + ) + self.router.add_api_route( + methods=["POST"], + path="/audio-library/play", + endpoint=self.play_from_audio_library, + tags=tags, + ) + self.router.add_api_route( + methods=["POST"], + path="/elevenlabs/generate-sfx", + endpoint=self.generate_sfx_elevenlabs, + tags=tags, + ) + self.router.add_api_route( + methods=["GET"], + path="/elevenlabs/subscription-data", + endpoint=self.get_elevenlabs_subscription_data, + response_model=dict, + tags=tags, + ) + self.router.add_api_route( + methods=["POST"], + path="/shutdown", + endpoint=self.shutdown, + tags=tags, + ) self.config_manager = config_manager self.config_service = ConfigService(config_manager=config_manager) @@ -209,6 +265,7 @@ def __init__( on_playback_started=self.on_playback_started, on_playback_finished=self.on_playback_finished, ) + self.audio_library = AudioLibrary() self.tower: Tower = None @@ -276,6 +333,7 @@ async def initialize_tower(self, config_dir_info: ConfigWithDirInfo): self.tower = Tower( config=config_dir_info.config, audio_player=self.audio_player, + audio_library=self.audio_library, whispercpp=self.whispercpp, xvasynth=self.xvasynth, ) @@ -290,8 +348,6 @@ async def initialize_tower(self, config_dir_info: ConfigWithDirInfo): async def unload_tower(self): if self.tower: for wingman in self.tower.wingmen: - for skill in wingman.skills: - await skill.unload() await wingman.unload() self.tower = None self.config_service.set_tower(None) @@ -802,16 +858,22 @@ def open_config_directory(self, 
config_name: str): def open_logs_directory(self): show_in_file_manager(get_writable_dir("logs")) + # POST /open-filemanager/audio-library + def open_audio_library_directory(self): + show_in_file_manager(get_writable_dir("audio_library")) + + # GET /models/openrouter async def get_openrouter_models(self): response = requests.get(url=f"https://openrouter.ai/api/v1/models", timeout=10) response.raise_for_status() content = response.json() return content.get("data", []) + # GET /models/groq async def get_groq_models(self): groq_api_key = await self.secret_keeper.retrieve(key="groq", requester="Groq") response = requests.get( - url=f"https://api.groq.com/openai/v1/models", + url="https://api.groq.com/openai/v1/models", timeout=10, headers={ "Authorization": f"Bearer {groq_api_key}", @@ -822,20 +884,139 @@ async def get_groq_models(self): content = response.json() return content.get("data", []) - async def get_elevenlabs_models(self): + async def get_cerebras_models(self): + cerebras_api_key = await self.secret_keeper.retrieve( + key="cerebras", requester="Cerebras" + ) + response = requests.get( + url="https://api.cerebras.ai/v1/models", + timeout=10, + headers={ + "Authorization": f"Bearer {cerebras_api_key}", + "Content-Type": "application/json", + }, + ) + response.raise_for_status() + content = response.json() + return content.get("data", []) + + async def get_openai_models(self): + openai_api_key = await self.secret_keeper.retrieve( + key="openai", requester="OpenAI" + ) + response = requests.get( + url="https://api.openai.com/v1/models", + timeout=10, + headers={ + "Authorization": f"Bearer {openai_api_key}", + "Content-Type": "application/json", + }, + ) + response.raise_for_status() + content = response.json() + return content.get("data", []) + + # GET /models/elevenlabs + async def get_elevenlabs_models(self) -> list[ElevenlabsModel]: + elevenlabs_api_key = await self.secret_keeper.retrieve( + key="elevenlabs", requester="Elevenlabs" + ) + elevenlabs = ElevenLabs(api_key=elevenlabs_api_key, wingman_name="") + try: + models = elevenlabs.get_available_models() + + convert = lambda model: ElevenlabsModel( + name=model.name, + model_id=model.modelID, + description=model.description, + max_characters=model.maxCharacters, + cost_factor=model.costFactor, + supported_languages=model.supportedLanguages, + metadata=model.metadata, + ) + result = [convert(model) for model in models] + return result + except ValueError as e: + self.printr.toast_error(f"Elevenlabs: \n{str(e)}") + return [] + + # GET /audio-library + async def get_audio_library(self): + return self.audio_library.get_audio_files() + + # POST /audio-library/play + async def play_from_audio_library( + self, name: str, path: str, volume: Optional[float] = 1.0 + ): + await self.audio_library.start_playback( + audio_file=AudioFile(name=name, path=path), volume_modifier=volume + ) + + # POST /elevenlabs/generate-sfx + async def generate_sfx_elevenlabs( + self, + prompt: str, + path: str, + name: str, + duration_seconds: Optional[float] = None, + prompt_influence: Optional[float] = None, + ): + elevenlabs_api_key = await self.secret_keeper.retrieve( + key="elevenlabs", requester="Elevenlabs" + ) + elevenlabs = ElevenLabs(api_key=elevenlabs_api_key, wingman_name="") + try: + audio_bytes = await elevenlabs.generate_sound_effect( + prompt=prompt, + duration_seconds=duration_seconds, + prompt_influence=prompt_influence, + ) + + if not name.endswith(".mp3"): + name += ".mp3" + + directory = get_writable_dir(os.path.join("audio_library", path)) + 
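+            # Avoid clobbering an existing file: append "-1", "-2", ... to the
+            # base name until it is unique in the target directory.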
+ if os.path.exists(os.path.join(directory, name)): + + def get_unique_filename(directory: str, filename: str) -> str: + base, extension = os.path.splitext(filename) + counter = 1 + while os.path.exists(os.path.join(directory, filename)): + filename = f"{base}-{counter}{extension}" + counter += 1 + return filename + + name = get_unique_filename(directory, name) + + with open(os.path.join(directory, name), "wb") as f: + f.write(audio_bytes) + + await self.audio_library.start_playback( + audio_file=AudioFile(name=name, path=path) + ) + except ValueError as e: + self.printr.toast_error(f"Elevenlabs: \n{str(e)}") + return False + + return True + + # GET /elevenlabs/subscription-data + async def get_elevenlabs_subscription_data(self): elevenlabs_api_key = await self.secret_keeper.retrieve( key="elevenlabs", requester="Elevenlabs" ) elevenlabs = ElevenLabs(api_key=elevenlabs_api_key, wingman_name="") - models = elevenlabs.get_available_models() - convert = lambda model: ElevenlabsModel( - name=model.name, - model_id=model.modelID, - description=model.description, - max_characters=model.maxCharacters, - cost_factor=model.costFactor, - supported_languages=model.supportedLanguages, - metadata=model.metadata, - ) - result = [convert(model) for model in models] - return result + try: + # Run the synchronous method in a separate thread + loop = asyncio.get_running_loop() + with ThreadPoolExecutor() as pool: + data = await loop.run_in_executor( + pool, elevenlabs.get_subscription_data + ) + return data + except ValueError as e: + self.printr.toast_error(f"Elevenlabs: \n{str(e)}") + + async def shutdown(self): + await self.unload_tower() diff --git a/wingmen/open_ai_wingman.py b/wingmen/open_ai_wingman.py index a5f42ceb..224f7907 100644 --- a/wingmen/open_ai_wingman.py +++ b/wingmen/open_ai_wingman.py @@ -7,7 +7,6 @@ from api.interface import ( SettingsConfig, SoundConfig, - WingmanConfig, WingmanInitializationError, ) from api.enums import ( @@ -25,9 +24,6 @@ from providers.google import GoogleGenAI from providers.open_ai import OpenAi, OpenAiAzure from providers.wingman_pro import WingmanPro -from providers.xvasynth import XVASynth -from providers.whispercpp import Whispercpp -from services.audio_player import AudioPlayer from services.markdown import cleanup_text from services.printr import Printr from skills.skill_base import Skill @@ -48,23 +44,8 @@ class OpenAiWingman(Wingman): "conversation": ConversationProvider.AZURE, } - def __init__( - self, - name: str, - config: WingmanConfig, - settings: SettingsConfig, - audio_player: AudioPlayer, - whispercpp: Whispercpp, - xvasynth: XVASynth, - ): - super().__init__( - name=name, - config=config, - audio_player=audio_player, - settings=settings, - whispercpp=whispercpp, - xvasynth=xvasynth, - ) + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) self.edge_tts = Edge() @@ -72,12 +53,14 @@ def __init__( self.openai: OpenAi = None self.mistral: OpenAi = None self.groq: OpenAi = None + self.cerebras: OpenAi = None self.openrouter: OpenAi = None self.local_llm: OpenAi = None self.openai_azure: OpenAiAzure = None self.elevenlabs: ElevenLabs = None self.wingman_pro: WingmanPro = None self.google: GoogleGenAI = None + self.perplexity: OpenAi = None # tool queue self.pending_tool_calls = [] @@ -107,6 +90,9 @@ async def validate(self): if self.uses_provider("groq"): await self.validate_and_set_groq(errors) + if self.uses_provider("cerebras"): + await self.validate_and_set_cerebras(errors) + if self.uses_provider("google"): await 
self.validate_and_set_google(errors) @@ -125,6 +111,9 @@ async def validate(self): if self.uses_provider("wingman_pro"): await self.validate_and_set_wingman_pro() + if self.uses_provider("perplexity"): + await self.validate_and_set_perplexity(errors) + return errors def uses_provider(self, provider_type: str): @@ -151,6 +140,13 @@ def uses_provider(self, provider_type: str): == ConversationProvider.GROQ, ] ) + elif provider_type == "cerebras": + return any( + [ + self.config.features.conversation_provider + == ConversationProvider.CEREBRAS, + ] + ) elif provider_type == "google": return any( [ @@ -199,6 +195,13 @@ def uses_provider(self, provider_type: str): self.config.features.stt_provider == SttProvider.WINGMAN_PRO, ] ) + elif provider_type == "perplexity": + return any( + [ + self.config.features.conversation_provider + == ConversationProvider.PERPLEXITY, + ] + ) return False async def prepare(self): @@ -252,6 +255,15 @@ async def validate_and_set_groq(self, errors: list[WingmanInitializationError]): base_url=self.config.groq.endpoint, ) + async def validate_and_set_cerebras(self, errors: list[WingmanInitializationError]): + api_key = await self.retrieve_secret("cerebras", errors) + if api_key: + # TODO: maybe use their native client (or LangChain) instead of OpenAI(?) + self.cerebras = OpenAi( + api_key=api_key, + base_url=self.config.cerebras.endpoint, + ) + async def validate_and_set_google(self, errors: list[WingmanInitializationError]): api_key = await self.retrieve_secret("google", errors) if api_key: @@ -302,6 +314,14 @@ async def validate_and_set_wingman_pro(self): wingman_name=self.name, settings=self.settings.wingman_pro ) + async def validate_and_set_perplexity(self, errors: list[WingmanInitializationError]): + api_key = await self.retrieve_secret("perplexity", errors) + if api_key: + self.perplexity = OpenAi( + api_key=api_key, + base_url=self.config.perplexity.endpoint, + ) + # overrides the base class method async def update_settings(self, settings: SettingsConfig): """Update the settings of the Wingman. 
This method should always be called when the user Settings have changed.""" @@ -861,7 +881,7 @@ async def actual_llm_call(self, messages, tools: list[dict] = None): completion = self.openai.ask( messages=messages, tools=tools, - model=self.config.openai.conversation_model.value, + model=self.config.openai.conversation_model, ) elif self.config.features.conversation_provider == ConversationProvider.MISTRAL: completion = self.mistral.ask( @@ -875,6 +895,14 @@ async def actual_llm_call(self, messages, tools: list[dict] = None): tools=tools, model=self.config.groq.conversation_model, ) + elif ( + self.config.features.conversation_provider == ConversationProvider.CEREBRAS + ): + completion = self.cerebras.ask( + messages=messages, + tools=tools, + model=self.config.cerebras.conversation_model, + ) elif self.config.features.conversation_provider == ConversationProvider.GOOGLE: completion = self.google.ask( messages=messages, @@ -907,6 +935,14 @@ async def actual_llm_call(self, messages, tools: list[dict] = None): deployment=self.config.wingman_pro.conversation_deployment, tools=tools, ) + elif ( + self.config.features.conversation_provider == ConversationProvider.PERPLEXITY + ): + completion = self.perplexity.ask( + messages=messages, + tools=tools, + model=self.config.perplexity.conversation_model.value, + ) return completion diff --git a/wingmen/wingman.py b/wingmen/wingman.py index cf05f6cf..75685d7b 100644 --- a/wingmen/wingman.py +++ b/wingmen/wingman.py @@ -21,6 +21,7 @@ from services.module_manager import ModuleManager from services.secret_keeper import SecretKeeper from services.printr import Printr +from services.audio_library import AudioLibrary from skills.skill_base import Skill @@ -39,6 +40,7 @@ def __init__( config: WingmanConfig, settings: SettingsConfig, audio_player: AudioPlayer, + audio_library: AudioLibrary, whispercpp: Whispercpp, xvasynth: XVASynth, ): @@ -57,6 +59,7 @@ def __init__( self.secret_keeper = SecretKeeper() """A service that allows you to store and retrieve secrets like API keys. It can prompt the user for secrets if necessary.""" + self.secret_keeper.secret_events.subscribe("secrets_saved", self.validate) self.name = name """The name of the wingman. This is the key you gave it in the config, e.g. "atc".""" @@ -64,6 +67,9 @@ def __init__( self.audio_player = audio_player """A service that allows you to play audio files and add sound effects to them.""" + self.audio_library = audio_library + """A service that allows you to play and manage audio files from the audio library.""" + self.execution_start: None | float = None """Used for benchmarking executon times. The timer is (re-)started whenever the process function starts.""" @@ -142,6 +148,7 @@ async def prepare(self): async def unload(self): """This method is called when the Wingman is unloaded by Tower. 
You can override it if you need to clean up resources.""" + await self.unload_skills() async def unload_skills(self): """Call this to trigger unload for all skills.""" @@ -424,7 +431,7 @@ async def _execute_command(self, command: CommandConfig) -> str: if not self.settings.debug_mode: # in debug mode we already printed the separate execution times await self.print_execution_time() - self.execute_action(command) + await self.execute_action(command) if len(command.actions or []) == 0: await printr.print_async( @@ -437,7 +444,7 @@ async def _execute_command(self, command: CommandConfig) -> str: return self._select_command_response(command) or "Ok" - def execute_action(self, command: CommandConfig): + async def execute_action(self, command: CommandConfig): """Executes the actions defined in the command (in order). Args: @@ -516,14 +523,22 @@ def execute_action(self, command: CommandConfig): if action.wait: time.sleep(action.wait) + if action.audio: + await self.audio_library.start_playback( + action.audio, self.config.sound.volume + ) + def threaded_execution(self, function, *args) -> threading.Thread: """Execute a function in a separate thread.""" def start_thread(function, *args): - new_loop = asyncio.new_event_loop() - asyncio.set_event_loop(new_loop) - new_loop.run_until_complete(function(*args)) - new_loop.close() + if asyncio.iscoroutinefunction(function): + new_loop = asyncio.new_event_loop() + asyncio.set_event_loop(new_loop) + new_loop.run_until_complete(function(*args)) + new_loop.close() + else: + function(*args) thread = threading.Thread(target=start_thread, args=(function, *args)) thread.start()
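+        # start_thread now handles both cases: coroutines get a fresh event
+        # loop on the worker thread, while plain callables run directly, e.g.:
+        #   self.threaded_execution(self.start_simconnect)  # coroutine function
+        #   self.threaded_execution(some_callable, arg)     # plain callable (illustrative)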