[MINOR][DOCS] Fix typos in docs, log messages and comments
### What changes were proposed in this pull request?
Fix typos in docs, log messages, and comments.

### Why are the changes needed?
Typo fixes to improve readability.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Manual testing was performed to verify the updated text.

Closes apache#29443 from brandonJY/spell-fix-doc.

Authored-by: Brandon Jiang <[email protected]>
Signed-off-by: Takeshi Yamamuro <[email protected]>
brandonJY authored and maropu committed Aug 21, 2020
1 parent 3dca81e commit 1450b5e
Showing 12 changed files with 12 additions and 12 deletions.
@@ -290,7 +290,7 @@ public boolean sharedByteBufAllocators() {
   }

   /**
-   * If enabled then off-heap byte buffers will be prefered for the shared ByteBuf allocators.
+   * If enabled then off-heap byte buffers will be preferred for the shared ByteBuf allocators.
    */
  public boolean preferDirectBufsForSharedByteBufAllocators() {
    return conf.getBoolean("spark.network.io.preferDirectBufs", true);
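
For context, the flag touched above is an ordinary Spark network setting. A minimal sketch of overriding it, assuming Spark is on the classpath; the app name and master are placeholders:

```scala
import org.apache.spark.SparkConf

// Hypothetical configuration; only the config key comes from the hunk above.
// Setting it to false keeps the shared ByteBuf allocators on heap buffers
// instead of the default off-heap (direct) buffers.
val conf = new SparkConf()
  .setAppName("netty-buffer-demo") // placeholder
  .setMaster("local[*]")           // placeholder
  .set("spark.network.io.preferDirectBufs", "false")
```
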
@@ -41,7 +41,7 @@ public interface DriverPlugin {
    * initialization.
    * <p>
    * It's recommended that plugins be careful about what operations are performed in this call,
-   * preferrably performing expensive operations in a separate thread, or postponing them until
+   * preferably performing expensive operations in a separate thread, or postponing them until
    * the application has fully started.
    *
    * @param sc The SparkContext loading the plugin.
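
To illustrate the recommendation in this Javadoc, a hedged Scala sketch of a plugin whose `init` returns quickly and defers slow work to a background thread; the class name and the warm-up body are invented for the example:

```scala
import java.util.{Collections, Map => JMap}

import org.apache.spark.SparkContext
import org.apache.spark.api.plugin.{DriverPlugin, ExecutorPlugin, PluginContext, SparkPlugin}

// Hypothetical plugin: init() returns immediately and pushes expensive work
// onto a daemon thread, as the Javadoc above recommends.
class WarmUpPlugin extends SparkPlugin {
  override def driverPlugin(): DriverPlugin = new DriverPlugin {
    override def init(sc: SparkContext, ctx: PluginContext): JMap[String, String] = {
      val warmUp = new Thread(() => {
        // placeholder for an expensive call, e.g. priming an external cache
        Thread.sleep(10000)
      }, "plugin-warm-up")
      warmUp.setDaemon(true)
      warmUp.start()
      Collections.emptyMap[String, String]() // no extra executor configuration
    }
  }

  // Driver-side only; no executor-side component in this sketch.
  override def executorPlugin(): ExecutorPlugin = null
}
```

Such a plugin would typically be enabled through the `spark.plugins` configuration.
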
@@ -29,7 +29,7 @@ import org.apache.spark.util.Utils.executeAndGetOutput
 /**
  * The default plugin that is loaded into a Spark application to control how custom
  * resources are discovered. This executes the discovery script specified by the user
- * and gets the json output back and contructs ResourceInformation objects from that.
+ * and gets the json output back and constructs ResourceInformation objects from that.
  * If the user specifies custom plugins, this is the last one to be executed and
  * throws if the resource isn't discovered.
  *
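
As a hedged illustration of how the discovery script is wired up: the script path, resource name, master, and amounts below are invented placeholders; only the general `spark.*.resource.*` config pattern and the JSON shape come from Spark's resource-scheduling documentation.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical GPU discovery setup. The script is expected to print one JSON
// object that maps onto ResourceInformation, e.g. {"name": "gpu", "addresses": ["0", "1"]}.
val spark = SparkSession.builder()
  .appName("resource-discovery-demo")       // placeholder
  .master("local-cluster[1, 1, 1024]")      // placeholder; resource scheduling needs a cluster-like master
  .config("spark.executor.resource.gpu.amount", "1")
  .config("spark.task.resource.gpu.amount", "1")
  .config("spark.executor.resource.gpu.discoveryScript", "/opt/spark/scripts/getGpus.sh") // placeholder path
  .getOrCreate()
```
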
2 changes: 1 addition & 1 deletion docs/job-scheduling.md
@@ -298,7 +298,7 @@ later.
 
 In order to synchronize PVM threads with JVM threads, you should set `PYSPARK_PIN_THREAD` environment variable
 to `true`. This pinned thread mode allows one PVM thread has one corresponding JVM thread. With this mode,
-`pyspark.InheritableThread` is recommanded to use together for a PVM thread to inherit the interitable attributes
+`pyspark.InheritableThread` is recommended to use together for a PVM thread to inherit the inheritable attributes
 such as local properties in a JVM thread.
 
 Note that `PYSPARK_PIN_THREAD` is currently experimental and not recommended for use in production.
2 changes: 1 addition & 1 deletion docs/sql-ref-syntax-qry-select-groupby.md
@@ -58,7 +58,7 @@ aggregate_name ( [ DISTINCT ] expression [ , ... ] ) [ FILTER ( WHERE boolean_ex
 
 * **grouping_expression**
 
-    Specifies the critieria based on which the rows are grouped together. The grouping of rows is performed based on
+    Specifies the criteria based on which the rows are grouped together. The grouping of rows is performed based on
     result values of the grouping expressions. A grouping expression may be a column alias, a column position
     or an expression.
 
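
A brief illustration of the three forms mentioned above, grouping by an alias, by ordinal position, and by an expression; it assumes an active `SparkSession` named `spark`, default settings for alias/ordinal resolution, and a hypothetical table `sales(city STRING, amount INT)`:

```scala
// Hypothetical queries against a sales(city, amount) table.
spark.sql("SELECT city AS c, sum(amount) FROM sales GROUP BY c")             // column alias
spark.sql("SELECT city, sum(amount) FROM sales GROUP BY 1")                  // column position
spark.sql("SELECT upper(city), sum(amount) FROM sales GROUP BY upper(city)") // expression
```
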
2 changes: 1 addition & 1 deletion docs/sql-ref-syntax-qry-select-hints.md
@@ -31,7 +31,7 @@ Hints give users a way to suggest how Spark SQL to use specific approaches to ge
 
 ### Partitioning Hints
 
-Partitioning hints allow users to suggest a partitioning stragety that Spark should follow. `COALESCE`, `REPARTITION`,
+Partitioning hints allow users to suggest a partitioning strategy that Spark should follow. `COALESCE`, `REPARTITION`,
 and `REPARTITION_BY_RANGE` hints are supported and are equivalent to `coalesce`, `repartition`, and
 `repartitionByRange` [Dataset APIs](api/scala/org/apache/spark/sql/Dataset.html), respectively. These hints give users
 a way to tune performance and control the number of output files in Spark SQL. When multiple partitioning hints are
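
A short sketch of the hint and its Dataset-API equivalent; the view name `events` and the partition count are placeholders, and `spark` is assumed to be an active `SparkSession`:

```scala
// SQL hint form: ask for 3 output partitions (and so, typically, 3 output files).
val hinted = spark.sql("SELECT /*+ REPARTITION(3) */ * FROM events")

// Equivalent Dataset API call mentioned in the paragraph above.
val repartitioned = spark.table("events").repartition(3)
```
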
2 changes: 1 addition & 1 deletion docs/sql-ref.md
@@ -32,7 +32,7 @@ Spark SQL is Apache Spark's module for working with structured data. This guide
 * [Integration with Hive UDFs/UDAFs/UDTFs](sql-ref-functions-udf-hive.html)
 * [Identifiers](sql-ref-identifier.html)
 * [Literals](sql-ref-literals.html)
-* [Null Semanitics](sql-ref-null-semantics.html)
+* [Null Semantics](sql-ref-null-semantics.html)
 * [SQL Syntax](sql-ref-syntax.html)
 * [DDL Statements](sql-ref-syntax-ddl.html)
 * [DML Statements](sql-ref-syntax-dml.html)
@@ -364,7 +364,7 @@ public void close() throws IOException {
    *
    * This method allows a short period for the above to happen (same amount of time as the
    * connection timeout, which is configurable). This should be fine for well-behaved
-   * applications, where they close the connection arond the same time the app handle detects the
+   * applications, where they close the connection around the same time the app handle detects the
    * app has finished.
    *
    * In case the connection is not closed within the grace period, this method forcefully closes
2 changes: 1 addition & 1 deletion sbin/decommission-worker.sh
@@ -46,7 +46,7 @@ else
 fi
 
 # Check if --block-until-exit is set.
-# This is done for systems which block on the decomissioning script and on exit
+# This is done for systems which block on the decommissioning script and on exit
 # shut down the entire system (e.g. K8s).
 if [ "$1" == "--block-until-exit" ]; then
   shift
@@ -176,7 +176,7 @@ Table alterTable(
    * @param newIdent the new table identifier of the table
    * @throws NoSuchTableException If the table to rename doesn't exist or is a view
    * @throws TableAlreadyExistsException If the new table name already exists or is a view
-   * @throws UnsupportedOperationException If the namespaces of old and new identiers do not
+   * @throws UnsupportedOperationException If the namespaces of old and new identifiers do not
    *     match (optional)
    */
  void renameTable(Identifier oldIdent, Identifier newIdent)
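
To make the contract concrete, a hedged sketch of a rename within a single namespace; `catalog`, the namespace, and the table names are placeholders for an already-loaded `TableCatalog` implementation:

```scala
import org.apache.spark.sql.connector.catalog.{Identifier, TableCatalog}

// Hypothetical rename inside one namespace. Per the Javadoc above, moving a
// table across namespaces may instead throw UnsupportedOperationException.
def renameWithinNamespace(catalog: TableCatalog): Unit = {
  val oldIdent = Identifier.of(Array("db"), "old_name") // placeholder identifiers
  val newIdent = Identifier.of(Array("db"), "new_name")
  catalog.renameTable(oldIdent, newIdent)
}
```
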
@@ -28,7 +28,7 @@ import org.apache.spark.util.BoundedPriorityQueue
  * There are two separate concepts we track:
  *
  * 1. Phases: These are broad scope phases in query planning, as listed below, i.e. analysis,
- *    optimizationm and physical planning (just planning).
+ *    optimization and physical planning (just planning).
  *
  * 2. Rules: These are the individual Catalyst rules that we track. In addition to time, we also
  *    track the number of invocations and effective invocations.
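
For readers unfamiliar with the tracker, a hedged sketch of reading per-phase timings off a query; this relies on internal APIs (`queryExecution.tracker` and its phase summaries) that are not a stable interface and may change between releases, and `spark` is assumed to be an active `SparkSession`:

```scala
// Run a trivial query so that analysis, optimization and planning all happen.
val df = spark.range(10).selectExpr("id * 2 AS doubled")
df.collect()

// Internal API (assumed shape): phase name -> summary with start/end times.
df.queryExecution.tracker.phases.foreach { case (phase, summary) =>
  println(s"$phase took ${summary.durationMs} ms")
}
```
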
@@ -36,7 +36,7 @@ case class ShowTablePropertiesExec(
     import scala.collection.JavaConverters._
     val toRow = RowEncoder(schema).resolveAndBind().createSerializer()
 
-    // The reservered properties are accessible through DESCRIBE
+    // The reserved properties are accessible through DESCRIBE
     val properties = catalogTable.properties.asScala
       .filter { case (k, v) => !CatalogV2Util.TABLE_RESERVED_PROPERTIES.contains(k) }
     propertyKey match {
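
As a usage note for the behavior described in this comment, a small sketch contrasting the two commands; the table name `testcat.db.t` is a placeholder and `spark` is assumed to be an active `SparkSession` with a v2 catalog configured:

```scala
// Reserved properties (e.g. the table's location or provider) are filtered out
// of SHOW TBLPROPERTIES and surface through DESCRIBE instead.
spark.sql("SHOW TBLPROPERTIES testcat.db.t").show(truncate = false)
spark.sql("DESCRIBE TABLE EXTENDED testcat.db.t").show(truncate = false)
```
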
