


# AWS Glue PySpark transforms reference
<a name="aws-glue-programming-python-transforms"></a>

AWS Glue provides the following built-in transforms that you can use in PySpark ETL operations. Your data passes from transform to transform in a data structure called a *DynamicFrame*, which is an extension to an Apache Spark SQL `DataFrame`. The `DynamicFrame` contains your data, and you reference its schema to process your data.

Most of these transforms also exist as methods of the `DynamicFrame` class. For more information, see [DynamicFrame transforms](aws-glue-api-crawler-pyspark-extensions-dynamic-frame.md#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-_transforms).
+ [GlueTransform base class](aws-glue-api-crawler-pyspark-transforms-GlueTransform.md)
+ [ApplyMapping class](aws-glue-api-crawler-pyspark-transforms-ApplyMapping.md)
+ [DropFields class](aws-glue-api-crawler-pyspark-transforms-DropFields.md)
+ [DropNullFields class](aws-glue-api-crawler-pyspark-transforms-DropNullFields.md)
+ [ErrorsAsDynamicFrame class](aws-glue-api-crawler-pyspark-transforms-ErrorsAsDynamicFrame.md)
+ [EvaluateDataQuality class](aws-glue-api-crawler-pyspark-transforms-EvaluateDataQuality.md)
+ [FillMissingValues class](aws-glue-api-crawler-pyspark-transforms-fillmissingvalues.md)
+ [Filter class](aws-glue-api-crawler-pyspark-transforms-filter.md)
+ [FindIncrementalMatches class](aws-glue-api-crawler-pyspark-transforms-findincrementalmatches.md)
+ [FindMatches class](aws-glue-api-crawler-pyspark-transforms-findmatches.md)
+ [FlatMap class](aws-glue-api-crawler-pyspark-transforms-flat-map.md)
+ [Join class](aws-glue-api-crawler-pyspark-transforms-join.md)
+ [Map class](aws-glue-api-crawler-pyspark-transforms-map.md)
+ [MapToCollection class](aws-glue-api-crawler-pyspark-transforms-MapToCollection.md)
+ [mergeDynamicFrame](aws-glue-api-crawler-pyspark-extensions-dynamic-frame.md#aws-glue-api-crawler-pyspark-extensions-dynamic-frame-merge)
+ [Relationalize class](aws-glue-api-crawler-pyspark-transforms-Relationalize.md)
+ [RenameField class](aws-glue-api-crawler-pyspark-transforms-RenameField.md)
+ [ResolveChoice class](aws-glue-api-crawler-pyspark-transforms-ResolveChoice.md)
+ [SelectFields class](aws-glue-api-crawler-pyspark-transforms-SelectFields.md)
+ [SelectFromCollection class](aws-glue-api-crawler-pyspark-transforms-SelectFromCollection.md)
+ [Simplify_ddb_json class](aws-glue-api-crawler-pyspark-transforms-simplify-ddb-json.md)
+ [Spigot class](aws-glue-api-crawler-pyspark-transforms-spigot.md)
+ [SplitFields class](aws-glue-api-crawler-pyspark-transforms-SplitFields.md)
+ [SplitRows class](aws-glue-api-crawler-pyspark-transforms-SplitRows.md)
+ [Unbox class](aws-glue-api-crawler-pyspark-transforms-Unbox.md)
+ [UnnestFrame class](aws-glue-api-crawler-pyspark-transforms-UnnestFrame.md)

## Data integration transforms
<a name="aws-glue-programming-python-di-transforms"></a>

For AWS Glue 4.0 and later, create or update the job arguments with `key: --enable-glue-di-transforms, value: true`.

Example job script:

```
from pyspark.context import SparkContext
from pyspark.sql import SparkSession

from awsgluedi.transforms import *

sc = SparkContext()
spark = SparkSession(sc)

input_df = spark.createDataFrame(
    [(5,), (0,), (-1,), (2,), (None,)],
    ["source_column"],
)

try:
    df_output = math_functions.IsEven.apply(
        data_frame=input_df,
        spark_context=sc,
        source_column="source_column",
        target_column="target_column",
        value=None,
        true_string="Even",
        false_string="Not even",
    )
    df_output.show()
except Exception:
    print("Unexpected error happened")
    raise
```
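
The per-value logic of the `IsEven` transform used above can be sketched in plain Python. This is a hypothetical illustration of the labeling rule (even values map to `true_string`, other numbers to `false_string`, and nulls pass through), not the transform's actual implementation:

```python
# Hypothetical sketch of the IsEven labeling rule, not the Glue implementation.
def label_is_even(value, true_string="Even", false_string="Not even"):
    if value is None:
        return None  # assumption: null values pass through unlabeled
    return true_string if value % 2 == 0 else false_string

# Same inputs as the example DataFrame above:
print([label_is_even(v) for v in [5, 0, -1, 2, None]])
```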

Example sessions using notebooks:

```
%idle_timeout 2880
%glue_version 4.0
%worker_type G.1X
%number_of_workers 5
%region eu-west-1
```

```
%%configure
{
    "--enable-glue-di-transforms": "true"
}
```

```
from pyspark.context import SparkContext
from awsgluedi.transforms import *

sc = SparkContext()

input_df = spark.createDataFrame(
    [(5,), (0,), (-1,), (2,), (None,)],
    ["source_column"],
)

try:
    df_output = math_functions.IsEven.apply(
        data_frame=input_df,
        spark_context=sc,
        source_column="source_column",
        target_column="target_column",
        value=None,
        true_string="Even",
        false_string="Not even",
    )
    df_output.show()
except Exception:
    print("Unexpected error happened")
    raise
```

Example sessions using the AWS CLI:

```
aws glue create-session --default-arguments "--enable-glue-di-transforms=true"
```
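
The same flag can also be set when creating a job through the AWS CLI via `--default-arguments`. The job name, role, and script location below are placeholders for illustration:

```
aws glue create-job \
    --name my-di-job \
    --role MyGlueJobRole \
    --command Name=glueetl,ScriptLocation=s3://my-bucket/script.py \
    --glue-version 4.0 \
    --default-arguments '{"--enable-glue-di-transforms": "true"}'
```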

Data integration transforms:
+ [FlagDuplicatesInColumn class](aws-glue-api-pyspark-transforms-FlagDuplicatesInColumn.md)
+ [FormatPhoneNumber class](aws-glue-api-pyspark-transforms-FormatPhoneNumber.md)
+ [FormatCase class](aws-glue-api-pyspark-transforms-FormatCase.md)
+ [FillWithMode class](aws-glue-api-pyspark-transforms-FillWithMode.md)
+ [FlagDuplicateRows class](aws-glue-api-pyspark-transforms-FlagDuplicateRows.md)
+ [RemoveDuplicates class](aws-glue-api-pyspark-transforms-RemoveDuplicates.md)
+ [MonthName class](aws-glue-api-pyspark-transforms-MonthName.md)
+ [IsEven class](aws-glue-api-pyspark-transforms-IsEven.md)
+ [CryptographicHash class](aws-glue-api-pyspark-transforms-CryptographicHash.md)
+ [Decrypt class](aws-glue-api-pyspark-transforms-Decrypt.md)
+ [Encrypt class](aws-glue-api-pyspark-transforms-Encrypt.md)
+ [IntToIp class](aws-glue-api-pyspark-transforms-IntToIp.md)
+ [IpToInt class](aws-glue-api-pyspark-transforms-IpToInt.md)
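
Among the transforms listed above, `IntToIp` and `IpToInt` convert between the integer and dotted-quad representations of IPv4 addresses. The underlying conversion can be sketched with the Python standard library; this is an illustrative sketch, not the Glue implementation, which applies the conversion column-wise:

```python
import ipaddress

# Illustrative integer <-> IPv4 string conversion using the standard library.
def int_to_ip(n: int) -> str:
    return str(ipaddress.IPv4Address(n))

def ip_to_int(ip: str) -> int:
    return int(ipaddress.IPv4Address(ip))

print(int_to_ip(3232235777))     # → 192.168.1.1
print(ip_to_int("192.168.1.1"))  # → 3232235777
```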

### Maven: Bundling the plugin with Spark applications
<a name="aws-glue-programming-python-di-transforms-maven"></a>

You can bundle the transforms dependency with your Spark applications and Spark distributions (version 3.3) by adding the plugin dependency to your Maven `pom.xml` while developing your Spark applications locally.

```
<repositories>
   ...
    <repository>
        <id>aws-glue-etl-artifacts</id>
        <url>https://aws-glue-etl-artifacts.s3.amazonaws.com/release/</url>
    </repository>
</repositories>
...
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>AWSGlueTransforms</artifactId>
    <version>4.0.0</version>
</dependency>
```

Alternatively, you can download the binaries directly from the AWS Glue Maven artifacts and include them in your Spark application as follows.

```
#!/bin/bash
sudo wget -v https://aws-glue-etl-artifacts.s3.amazonaws.com/release/com/amazonaws/AWSGlueTransforms/4.0.0/AWSGlueTransforms-4.0.0.jar -P /usr/lib/spark/jars/
```