wsh file overhaul without cross-remote copy and S3 #1790
Conversation
Actionable comments posted: 2
🔭 Outside diff range comments (2)
pkg/filestore/blockstore_cache.go (1)
Line range hint
213-225
: Consider keeping the size parameter as int64. Changing size from int64 to int could limit the maximum read size on 32-bit systems. While the MaxInt bounds checking is good, consider:
- Keeping size as int64 for consistency with offset
- Only converting to int after bounds checking

```diff
-func (entry *CacheEntry) readAt(ctx context.Context, offset int64, size int, readFull bool) (int64, []byte, error) {
+func (entry *CacheEntry) readAt(ctx context.Context, offset int64, size int64, readFull bool) (int64, []byte, error) {
     // ... existing checks ...
     if readFull {
-        size = int(min(file.Size-offset, math.MaxInt))
+        size = min(file.Size-offset, math.MaxInt64)
     }
-    if offset+int64(size) > file.Size {
-        size = int(min(file.Size-offset, math.MaxInt))
+    if offset+size > file.Size {
+        size = min(file.Size-offset, math.MaxInt64)
     }
```

pkg/filestore/blockstore_test.go (1)
Enable race detection and enhance concurrent test coverage
The concurrent append test needs improvement in several areas:
- Enable the Go race detector by running tests with the -race flag
- Add assertions to verify the integrity and ordering of appended data
- Test different concurrency patterns (varying number of goroutines, different flush intervals)
- Add stress tests for edge cases in concurrent operations
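As a loose illustration of the assertions and -race usage being suggested, here is a minimal, self-contained sketch; the `appender` type is only a stand-in for the real filestore append API (which is not shown here), and the point is the shape of the test: many goroutines appending tagged chunks, then integrity assertions afterward, run under `go test -race`.

```go
package sketch

import (
	"bytes"
	"fmt"
	"sync"
	"testing"
)

// appender is a stand-in for the real filestore append API (hypothetical).
type appender struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

func (a *appender) Append(data []byte) error {
	a.mu.Lock()
	defer a.mu.Unlock()
	_, err := a.buf.Write(data)
	return err
}

func TestConcurrentAppendIntegrity(t *testing.T) {
	const numWriters = 16
	a := &appender{}
	var wg sync.WaitGroup
	for i := 0; i < numWriters; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// each writer appends a uniquely tagged chunk
			if err := a.Append([]byte(fmt.Sprintf("[chunk-%02d]", id))); err != nil {
				t.Errorf("append failed: %v", err)
			}
		}(i)
	}
	wg.Wait()

	got := a.buf.Bytes()
	// integrity: every chunk must appear exactly once and un-interleaved
	for i := 0; i < numWriters; i++ {
		tag := []byte(fmt.Sprintf("[chunk-%02d]", i))
		if bytes.Count(got, tag) != 1 {
			t.Errorf("chunk %d missing or duplicated in output", i)
		}
	}
	if len(got) != numWriters*len("[chunk-00]") {
		t.Errorf("unexpected total size: got %d", len(got))
	}
}
```

Run with `go test -race ./...` so the race detector is active while the goroutines append.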
🔗 Analysis chain
Line range hint
594-623
: Verify concurrent test coverage. The concurrent append test should be verified for potential race conditions. Consider adding more assertions to verify data consistency.
Run this script to check for race condition tests:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check if tests are run with race detector
# Look for go test commands with -race flag in CI/CD configs
rg -l 'go test.*-race' .github/workflows/
```
Length of output: 43
Script:
```bash
#!/bin/bash
# Check for Makefiles and other build configs
fd Makefile
# Search for race-related test patterns and comments
rg -i "race|concurrent" --type go -g '*_test.go' -A 2 -B 2
# Look for similar concurrent test patterns
ast-grep --pattern 'func Test$_($_, $_) {
  $$$
  var wg sync.WaitGroup
  $$$
  go func() {
    $$$
  }()
  $$$
}'
```
Length of output: 702
🧹 Nitpick comments (14)
pkg/wshrpc/wshremote/wshremote.go (1)
686-694
: Add deferred file close and error handling. The file handle should be closed using defer, and errors from both file operations should be handled.

```diff
 file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, createMode)
 if err != nil {
 	return fmt.Errorf("cannot open file %q: %w", path, err)
 }
+defer file.Close()
 offsetWriter := io.NewOffsetWriter(file, offset)
 n, err = offsetWriter.Write(dataBytes[:n])
 if err != nil {
 	return fmt.Errorf("cannot write to file %q: %w", path, err)
 }
+if err := file.Sync(); err != nil {
+	return fmt.Errorf("cannot sync file %q: %w", path, err)
+}
```

pkg/filestore/blockstore_cache.go (3)
189-201
: Simplify part index calculations. The part index handling could be simplified while maintaining the same safety checks.
Consider this alternative implementation:

```diff
-partIdxI64 := offset / int64(partDataSize)
-if partIdxI64 > math.MaxInt {
-	// too big
-	return
-}
-partIdx := int(partIdxI64)
+if offset/int64(partDataSize) > math.MaxInt {
+	return
+}
+partIdx := int(offset / int64(partDataSize))
```
229-235
: Improve circular buffer bounds checking. The circular buffer reading logic needs more explicit bounds checking and documentation.
Consider adding comments explaining the bounds checking logic and edge cases:

```diff
 if file.Opts.Circular {
+	// Calculate the start offset for circular buffer
 	realDataOffset := int64(0)
 	if file.Size > int64(file.Opts.MaxSize) {
 		realDataOffset = file.Size - int64(file.Opts.MaxSize)
 	}
+	// Adjust offset and size if reading from before the circular buffer start
 	if offset < realDataOffset {
 		truncateAmt := realDataOffset - offset
 		offset += truncateAmt
 		size = int(max(int64(size)-truncateAmt, 0))
 	}
```
260-264
: Optimize read operation type conversions.The current implementation performs multiple type conversions during read operations. Consider pre-calculating common values to reduce conversions.
+ partOffsetI64 := curReadOffset % int64(partDataSize) + if partOffsetI64 > math.MaxInt { + return offset, nil, fmt.Errorf("offset too large") + } partOffset := int(curReadOffset % int64(partDataSize)) amtToRead := min(partDataSize-partOffset, amtLeftToRead) rtnData = append(rtnData, partData[partOffset:partOffset+amtToRead]...) amtLeftToRead -= amtToRead curReadOffset += int64(amtToRead)pkg/util/iochan/iochan.go (5)
20-20
: Consider making the channel buffer size configurable. The channel buffer size of 32 seems arbitrary. Consider making this a parameter or a configurable constant to allow tuning based on different use cases.
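For illustration only, a minimal sketch of what a tunable buffer could look like. The function name, signature, and behavior here are assumptions for the sketch, not the iochan package's actual API; it just shows the buffer size moving from a hard-coded 32 to a parameter.

```go
package iochan

import (
	"context"
	"io"
)

// ReaderChanBuffered is an illustrative variant in which the channel buffer
// size is a parameter instead of a hard-coded 32 (name and signature are
// assumptions, not the package's real API).
func ReaderChanBuffered(ctx context.Context, r io.Reader, chunkSize, bufSize int, callback func()) <-chan []byte {
	ch := make(chan []byte, bufSize) // tunable instead of the fixed 32
	go func() {
		defer func() {
			callback() // let the caller react before the channel closes
			close(ch)
		}()
		for {
			buf := make([]byte, chunkSize)
			n, err := r.Read(buf)
			if n > 0 {
				select {
				case ch <- buf[:n]:
				case <-ctx.Done():
					return
				}
			}
			if err != nil {
				return // io.EOF or a real error; either way stop producing
			}
		}
	}()
	return ch
}
```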
46-46
: Remove commented debug log. Remove the commented-out debug log statement as it adds noise to the code.

```diff
- // log.Printf("ReaderChan: read %d bytes", n)
```

22-26
: Consider moving the callback before the channel close. The callback is executed after closing the channel. If the caller needs to know when to stop sending before the channel is closed, consider moving the callback before the close operation.

```diff
 defer func() {
+	callback()
 	log.Printf("ReaderChan: closing channel")
 	close(ch)
-	callback()
 }()
```

81-81
: Remove commented debug log. Remove the commented-out debug log statement as it adds noise to the code.

```diff
- // log.Printf("WriterChan: wrote %d bytes", n)
```

80-82
: Remove unnecessary else block. The else block is unnecessary since the if block always returns.

```diff
-	} else {
-		// log.Printf("WriterChan: wrote %d bytes", n)
-	}
+	}
```

pkg/wshrpc/wshrpctypes.go (3)
22-31
: LGTM! Consider documenting the rationale for the chosen limits. The file size and chunk size constants are well-defined with reasonable values. However, it would be helpful to document why these specific values were chosen, especially for maintenance and future adjustments.

```diff
 const (
 	// MaxFileSize is the maximum file size that can be read
-	MaxFileSize = 50 * 1024 * 1024 // 50M
+	// 50M was chosen as it balances memory usage with typical file sizes
+	MaxFileSize = 50 * 1024 * 1024
 	// MaxDirSize is the maximum number of entries that can be read in a directory
-	MaxDirSize = 1024
+	// 1024 entries provides good coverage for most directory sizes while preventing memory exhaustion
+	MaxDirSize = 1024
 	// FileChunkSize is the size of the file chunk to read
-	FileChunkSize = 16 * 1024
+	// 16KB chunks optimize for network packet sizes and memory usage
+	FileChunkSize = 16 * 1024
 	// DirChunkSize is the size of the directory chunk to read
-	DirChunkSize = 128
+	// 128 entries per chunk allows for smooth streaming while maintaining responsiveness
+	DirChunkSize = 128
 )
```
376-391
: Consider adding documentation for FileInfo fields.The
FileInfo
struct is well-designed but could benefit from field documentation, especially for fields likeSupportsMkdir
andReadOnly
that have specific implications.type FileInfo struct { + // Path is the cleaned path that may contain "~" Path string `json:"path"` // cleaned path (may have "~") + // Dir is the directory part of the path with expanded "~" and normalized separators Dir string `json:"dir,omitempty"` // returns the directory part of the path (if this is a a directory, it will be equal to Path). "~" will be expanded, and separators will be normalized to "/" + // Name is the base name of the file or directory Name string `json:"name,omitempty"` + // NotFound indicates if the file or directory was not found NotFound bool `json:"notfound,omitempty"` + // Opts contains file-specific options Opts *FileOptsType `json:"opts,omitempty"` + // Size is the file size in bytes Size int64 `json:"size,omitempty"` + // Meta contains file metadata Meta *FileMeta `json:"meta,omitempty"` + // Mode contains the file mode and permission bits Mode os.FileMode `json:"mode,omitempty"` + // ModeStr is the string representation of Mode ModeStr string `json:"modestr,omitempty"` + // ModTime is the last modification time in Unix timestamp ModTime int64 `json:"modtime,omitempty"` + // IsDir indicates if this is a directory IsDir bool `json:"isdir,omitempty"` + // SupportsMkdir indicates if mkdir operations are supported in this directory SupportsMkdir bool `json:"supportsmkdir,omitempty"` + // MimeType is the detected MIME type of the file MimeType string `json:"mimetype,omitempty"` + // ReadOnly indicates if the file is read-only (not set for directory listings) ReadOnly bool `json:"readonly,omitempty"` }
510-514
: Consider adding validation for FileCopyOpts. The FileCopyOpts struct should validate that Merge is only used with Recursive, since merging only makes sense for directories. Consider adding a validation method:

```go
func (opts *FileCopyOpts) Validate() error {
	if opts.Merge && !opts.Recursive {
		return fmt.Errorf("merge option requires recursive to be enabled")
	}
	return nil
}
```

pkg/remote/fileshare/wavefs/wavefs.go (1)
143-153
: Optimize memory usage in directory mapping. The dirMap uses an empty struct as its value but declares it as map[string]any. Using the struct{} type would be more memory efficient.
Apply this change:

```diff
- dirMap := make(map[string]any) // the value is max modtime
+ dirMap := make(map[string]struct{}) // using empty struct for memory efficiency
```

pkg/remote/awsconn/awsconn.go (1)
1-2
: Enhance package documentation. Consider adding more detailed package documentation to explain:
- The package's purpose and main functionalities
- Usage examples
- Any prerequisites or dependencies

```diff
-// Description: This package is used to create a connection to AWS services.
+// Package awsconn provides functionality for managing AWS service connections and configurations.
+//
+// It supports:
+//   - AWS configuration management with profile support
+//   - Temporary file handling for AWS credentials
+//   - S3 bucket operations
+//
+// Usage:
+//   cfg, err := awsconn.GetConfig(ctx, "aws:myprofile")
+//   if err != nil {
+//     // Handle error
+//   }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
pkg/filestore/blockstore.go
(13 hunks)pkg/filestore/blockstore_cache.go
(7 hunks)pkg/filestore/blockstore_test.go
(15 hunks)pkg/remote/awsconn/awsconn.go
(1 hunks)pkg/remote/fileshare/s3fs/s3fs.go
(1 hunks)pkg/remote/fileshare/wavefs/wavefs.go
(1 hunks)pkg/util/iochan/iochan.go
(1 hunks)pkg/wshrpc/wshremote/wshremote.go
(12 hunks)pkg/wshrpc/wshrpctypes.go
(8 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/filestore/blockstore.go
- pkg/remote/fileshare/s3fs/s3fs.go
🧰 Additional context used
📓 Learnings (1)
pkg/remote/awsconn/awsconn.go (1)
Learnt from: esimkowitz
PR: wavetermdev/waveterm#1790
File: pkg/remote/awsconn/awsconn.go:32-33
Timestamp: 2025-01-22T01:02:32.621Z
Learning: The AWS connection code path in pkg/remote/awsconn/awsconn.go is currently inactive, so temporary file cleanup is not a critical concern at this time.
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: Build for TestDriver.ai
🔇 Additional comments (23)
pkg/wshrpc/wshremote/wshremote.go (4)
30-30
: Keep OneYear as a time.Duration to prevent potential overflow. Since time.Duration is an int64, casting it to int may cause overflow on 32-bit systems.

```diff
-OneYear = int(time.Hour * 24 * 365)
+OneYear = time.Hour * 24 * 365
```
Line range hint
73-117
: Well-implemented file streaming with proper chunking and byte range support. The implementation correctly handles:
- Directory size limits
- Byte range validation
- Parent directory information
- Chunked data transfer
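For readers unfamiliar with what "byte range validation" involves here, a minimal, hedged sketch follows. It is not the project's code; the function name and clamping behavior are assumptions meant only to show a requested [offset, offset+size) window being checked and clamped against the file size.

```go
package main

import "fmt"

// validateByteRange clamps a requested [offset, offset+size) window against the
// actual file size. Illustrative only; names and exact behavior are assumptions.
func validateByteRange(offset, size, fileSize int64) (int64, int64, error) {
	if offset < 0 || size < 0 {
		return 0, 0, fmt.Errorf("invalid byte range: offset=%d size=%d", offset, size)
	}
	if offset > fileSize {
		return 0, 0, fmt.Errorf("offset %d is beyond end of file (size %d)", offset, fileSize)
	}
	if size == 0 || offset+size > fileSize {
		size = fileSize - offset // read to EOF when size is zero or over-long
	}
	return offset, size, nil
}

func main() {
	off, sz, err := validateByteRange(100, 0, 4096)
	fmt.Println(off, sz, err) // 100 3996 <nil>
}
```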
232-234
: Ensure correct units when converting timeout values to time.Duration. The timeout value needs to be multiplied by time.Millisecond.

```diff
 if opts.Timeout > 0 {
-	timeout = time.Duration(opts.Timeout)
+	timeout = time.Duration(opts.Timeout) * time.Millisecond
 }
```
360-363
: Good security practice: path traversal prevention implemented. The code correctly checks for directory traversal attempts using `..` in paths.

pkg/filestore/blockstore_cache.go (3)
11-11
: LGTM: Import math package for integer operations.The addition of the math package is appropriate for the MaxInt constants and mathematical operations used in the type conversions.
148-148
: LGTM: Type safety improvement in leftInPart calculation.The explicit conversion to int64 ensures type consistency and prevents potential integer overflow issues when dealing with large file sizes.
Line range hint
165-181
: Verify circular buffer edge cases. While the type conversions improve safety, the circular buffer logic is complex and should be thoroughly tested, especially for:
- Edge cases where offset + data length approaches MaxInt
- Cases where truncation occurs from both front and back
Would you like me to generate test cases covering these scenarios?
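As a small, self-contained model of what such edge-case tests could target, the helper below mirrors the offset/size truncation arithmetic shown in the circular-buffer diff earlier in this review. The function name and signature are illustrative, not the filestore package's API; real tests would exercise the package itself.

```go
package main

import "fmt"

// adjustCircularRead mirrors the offset/size adjustment from the diff above:
// when the file has wrapped, reads that start before the oldest retained byte
// are truncated from the front.
func adjustCircularRead(offset, size, fileSize, maxSize int64) (int64, int64) {
	realDataOffset := int64(0)
	if fileSize > maxSize {
		realDataOffset = fileSize - maxSize
	}
	if offset < realDataOffset {
		truncateAmt := realDataOffset - offset
		offset += truncateAmt
		size = max(size-truncateAmt, 0)
	}
	return offset, size
}

func main() {
	// truncation from the front: the first 1024 bytes have been overwritten
	fmt.Println(adjustCircularRead(0, 512, 5120, 4096))  // 1024 0 (request fully truncated)
	fmt.Println(adjustCircularRead(0, 2048, 5120, 4096)) // 1024 1024
	// no truncation needed when the file still fits in the buffer
	fmt.Println(adjustCircularRead(0, 2048, 4096, 4096)) // 0 2048
}
```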
pkg/util/iochan/iochan.go (1)
88-90
: LGTM!The drainChannel implementation is efficient, using a simple range loop to consume remaining messages.
pkg/wshrpc/wshrpctypes.go (3)
147-157
: LGTM! Well-structured interface changes for file operations.The file operation methods have been standardized to use the new
FileData
type, with appropriate streaming support where needed. The interface is consistent and follows Go best practices.
522-527
: LGTM! Consistent remote command types.The remote command types are well-aligned with the new file types and maintain a consistent structure.
604-608
: LGTM! Well-integrated block info changes.The addition of
Files
toBlockInfoData
is a good integration with the new file handling system.pkg/remote/fileshare/wavefs/wavefs.go (4)
1-24
: LGTM! Clean package structure with appropriate imports.
25-31
: LGTM! Clean interface implementation pattern.The empty struct pattern is appropriate here as the client is stateless.
278-301
: Error handling in the Delete method should be improved. The event is published after the file deletion but doesn't consider whether the deletion was successful.
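A minimal sketch of the ordering being suggested, with hypothetical names throughout (the delete and publish hooks are stand-ins, not the wavefs API): check the delete error first, and only publish the event on success.

```go
package main

import (
	"context"
	"fmt"
)

// deleteAndNotify illustrates the suggested ordering: publish the deletion
// event only after the delete has succeeded. All names here are hypothetical.
func deleteAndNotify(ctx context.Context, path string,
	del func(context.Context, string) error, publish func(path string)) error {
	if err := del(ctx, path); err != nil {
		// do not announce a deletion that did not happen
		return fmt.Errorf("delete %q: %w", path, err)
	}
	publish(path)
	return nil
}

func main() {
	err := deleteAndNotify(context.Background(), "wavefile://block/config.txt",
		func(ctx context.Context, p string) error { return nil }, // stand-in delete
		func(p string) { fmt.Println("published delete event for", p) })
	fmt.Println("err:", err)
}
```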
312-333
: LGTM! Robust path sanitization.The path cleaning logic properly handles security concerns by preventing path traversal attempts and normalizing paths.
pkg/filestore/blockstore_test.go (3)
21-21
: LGTM! Import changes are correct.The addition of the wshrpc package import aligns with the migration to use
wshrpc.FileOptsType
.
86-86
: LGTM! Type changes are consistent.The migration from
FileOptsType
towshrpc.FileOptsType
is consistently applied across all test functions.Also applies to: 160-160, 174-178, 223-223, 327-327, 355-355, 376-376, 398-398, 487-487, 541-541, 562-562, 594-594, 682-682
309-309
: LGTM! Helper function parameter type change is correct.The change from
int64(len(data))
tolen(data)
aligns with the updatedReadAt
method signature.pkg/remote/awsconn/awsconn.go (5)
32-32
: LGTM!The tempfiles map implementation is correct for the current inactive state of the AWS connection code path.
34-70
: LGTM!The AWS configuration loading implementation is correct with proper profile validation and configuration file handling.
72-89
: LGTM!The temporary file handling implementation is robust with proper error handling for both file creation and writing operations.
91-121
: LGTM!The profile parsing implementation correctly handles both AWS config and credentials files with proper error handling and nil checks.
123-148
:⚠️ Potential issue — Fix incorrect S3 bucket listing implementation.
The ListBuckets operation doesn't support pagination or filtering by region. The BucketRegion parameter in ListBucketsInput is invalid and will cause the operation to fail. Apply this diff to fix the implementation:

```diff
 func ListBuckets(ctx context.Context, client *s3.Client) ([]types.Bucket, error) {
-	var err error
-	var output *s3.ListBucketsOutput
-	var buckets []types.Bucket
-	region := client.Options().Region
-	bucketPaginator := s3.NewListBucketsPaginator(client, &s3.ListBucketsInput{BucketRegion: &region})
-	for bucketPaginator.HasMorePages() {
-		output, err = bucketPaginator.NextPage(ctx)
-		log.Printf("output: %v", output)
+	output, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
+	if err != nil {
+		var apiErr smithy.APIError
+		if errors.As(err, &apiErr) && apiErr.ErrorCode() == "AccessDenied" {
+			fmt.Println("You don't have permission to list buckets for this account.")
+			return nil, apiErr
+		}
+		return nil, fmt.Errorf("Couldn't list buckets for your account. Here's why: %v", err)
+	}
+	if output == nil {
+		return nil, nil
+	}
+	return output.Buckets, nil
+}
```

Likely invalid or redundant comment.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 5
♻️ Duplicate comments (3)
frontend/app/view/preview/preview.tsx (1)
782-785
:⚠️ Potential issue — Add URL encoding for path components.
The formatRemoteUri method should encode the path to handle special characters safely. As mentioned in a previous review, apply this fix:

```diff
 async formatRemoteUri(path: string): Promise<string> {
     const conn = (await globalStore.get(this.connection)) ?? "local";
-    return `wsh://${conn}/${path}`;
+    return `wsh://${conn}/${encodeURIComponent(path)}`;
 }
```

frontend/app/view/preview/directorypreview.tsx (2)
622-626
: 🛠️ Refactor suggestionEnhance error handling for file deletion.
The current implementation only logs errors to the console. Consider showing a user-friendly error message.
719-728
: 🛠️ Refactor suggestionAdd error handling for file reading.
The file reading operation lacks error handling. Consider adding try-catch and showing appropriate error messages.
🧹 Nitpick comments (11)
pkg/wshrpc/wshrpctypes.go (5)
23-32
: Enhance constant documentation with units.The constants would benefit from more detailed documentation that includes their units.
const ( - // MaxFileSize is the maximum file size that can be read + // MaxFileSize is the maximum file size that can be read (50MB) MaxFileSize = 50 * 1024 * 1024 // 50M - // MaxDirSize is the maximum number of entries that can be read in a directory + // MaxDirSize is the maximum number of entries that can be read in a directory (1024 entries) MaxDirSize = 1024 - // FileChunkSize is the size of the file chunk to read + // FileChunkSize is the size of each file chunk to read (16KB) FileChunkSize = 16 * 1024 - // DirChunkSize is the size of the directory chunk to read + // DirChunkSize is the number of directory entries to read in each chunk (128 entries) DirChunkSize = 128 )
149-160
: Add return type documentation for file operations.The file operation methods would benefit from documentation describing their return types and error conditions.
For example:
// FileMkdirCommand creates a directory at the specified path. // Returns an error if the directory already exists or if creation fails. FileMkdirCommand(ctx context.Context, data FileData) error // FileInfoCommand retrieves file information for the specified path. // Returns *FileInfo containing file metadata or an error if the file doesn't exist. FileInfoCommand(ctx context.Context, data FileData) (*FileInfo, error)
379-394
: Improve FileInfo.Dir field documentation.The
Dir
field documentation could be more explicit about path normalization rules.type FileInfo struct { Path string `json:"path"` // cleaned path (may have "~") - Dir string `json:"dir,omitempty"` // returns the directory part of the path (if this is a a directory, it will be equal to Path). "~" will be expanded, and separators will be normalized to "/" + Dir string `json:"dir,omitempty"` // returns the normalized directory path with: 1) "~" expanded to home directory, 2) all separators converted to "/", 3) equal to Path if this is a directory Name string `json:"name,omitempty"` // ... rest of the fields }
513-517
: Add validation for FileCopyOpts.Timeout.The
Timeout
field should be validated to prevent negative values.Consider adding a validation method:
func (opts *FileCopyOpts) Validate() error { if opts.Timeout < 0 { return fmt.Errorf("timeout must be non-negative, got %v", opts.Timeout) } return nil }
607-611
: Document Files field relationship with Block.The
Files
field inBlockInfoData
needs documentation explaining its relationship with theBlock
field.type BlockInfoData struct { BlockId string `json:"blockid"` TabId string `json:"tabid"` WorkspaceId string `json:"workspaceid"` Block *waveobj.Block `json:"block"` - Files []*FileInfo `json:"files"` + Files []*FileInfo `json:"files"` // List of files associated with this block's execution or output }frontend/app/store/wshclientapi.ts (1)
160-164
: Inconsistent parameter types inFileCopyCommand
andRemoteFileCopyCommand
While most file operation commands now use
FileData
as the parameter type,FileCopyCommand
usesCommandFileCopyData
, andRemoteFileCopyCommand
usesCommandRemoteFileCopyData
. For consistency and to maintain a unified API, consider updating these methods to useFileData
if applicable. If there is a specific reason to use different data types, ensure that it is well-documented.Also applies to: 250-254
pkg/wshrpc/wshclient/wshclient.go (1)
197-202
: Inconsistent parameter types inFileCopyCommand
andRemoteFileCopyCommand
While other file operation commands use
wshrpc.FileData
,FileCopyCommand
useswshrpc.CommandFileCopyData
, andRemoteFileCopyCommand
useswshrpc.CommandRemoteFileCopyData
. Consider updating these functions to usewshrpc.FileData
for consistency, or ensure that the different types are necessary and well-documented.Also applies to: 304-308
pkg/wshutil/wshrpc.go (1)
736-756
: Optimize retry mechanism to avoid busy-waiting.The
retrySendTimeout
function uses a busy-wait loop with a fixed sleep duration, which may not be optimal for system resources. Consider using exponential backoff or a more efficient notification mechanism.Apply this diff to implement exponential backoff:
func (w *WshRpc) retrySendTimeout(resId string) { + backoff := 100 * time.Millisecond + maxBackoff := 1 * time.Second for { done := func() bool { w.Lock.Lock() defer w.Lock.Unlock() if w.ServerDone { return true } select { case w.CtxDoneCh <- resId: return true default: return false } }() if done { return } - time.Sleep(100 * time.Millisecond) + time.Sleep(backoff) + backoff = time.Duration(float64(backoff) * 1.5) + if backoff > maxBackoff { + backoff = maxBackoff + } } }frontend/app/view/preview/preview.tsx (1)
604-609
: Enhance error handling in handleFileSave.While the implementation is correct, consider surfacing the error to the user interface for better UX.
try { await RpcApi.FileWriteCommand(TabRpcClient, { info: { path: await this.formatRemoteUri(filePath), }, data64: stringToBase64(newFileContent), }); globalStore.set(this.fileContent, newFileContent); globalStore.set(this.newFileContent, null); console.log("saved file", filePath); } catch (error) { console.error("Error saving file:", error); + // Consider adding user-facing error notification + globalStore.set(this.connectionError, `Failed to save file: ${error.message}`); }cmd/wsh/cmd/wshcmd-file.go (2)
36-36
: Consider using a more precise calculation for TimeoutYear. The current calculation of 365 days doesn't account for leap years. For more precise timing, consider using 8760 * time.Hour (365 days) or 8766 * time.Hour (365.25 days).
(365.25 days).
375-401
: Consider adding progress feedback for large file transfers.The implementation is solid, but given the long timeout, consider adding progress feedback for better user experience during large file transfers.
func fileCpRun(cmd *cobra.Command, args []string) error { src, dst := args[0], args[1] recursive, err := cmd.Flags().GetBool("recursive") if err != nil { return err } merge, err := cmd.Flags().GetBool("merge") if err != nil { return err } force, err := cmd.Flags().GetBool("force") if err != nil { return err } srcPath, err := fixRelativePaths(src) if err != nil { return fmt.Errorf("unable to parse src path: %w", err) } destPath, err := fixRelativePaths(dst) if err != nil { return fmt.Errorf("unable to parse dest path: %w", err) } log.Printf("Copying %s to %s; recursive: %v, merge: %v, force: %v", srcPath, destPath, recursive, merge, force) rpcOpts := &wshrpc.RpcOpts{Timeout: TimeoutYear} + fmt.Printf("Copying files... This may take a while for large transfers.\n") err = wshclient.FileCopyCommand(RpcClient, wshrpc.CommandFileCopyData{SrcUri: srcPath, DestUri: destPath, Opts: &wshrpc.FileCopyOpts{Recursive: recursive, Merge: merge, Overwrite: force, Timeout: TimeoutYear}}, rpcOpts) if err != nil { return fmt.Errorf("copying file: %w", err) } + fmt.Printf("Copy completed successfully.\n") return nil }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
cmd/wsh/cmd/wshcmd-file-util.go
(10 hunks)cmd/wsh/cmd/wshcmd-file.go
(12 hunks)frontend/app/store/wshclientapi.ts
(6 hunks)frontend/app/view/preview/directorypreview.tsx
(7 hunks)frontend/app/view/preview/preview.tsx
(7 hunks)pkg/remote/fileshare/wavefs/wavefs.go
(1 hunks)pkg/remote/fileshare/wshfs/wshfs.go
(1 hunks)pkg/wshrpc/wshclient/wshclient.go
(7 hunks)pkg/wshrpc/wshremote/wshremote.go
(12 hunks)pkg/wshrpc/wshrpctypes.go
(10 hunks)pkg/wshrpc/wshserver/wshserver.go
(9 hunks)pkg/wshutil/wshrpc.go
(10 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pkg/remote/fileshare/wavefs/wavefs.go
- pkg/remote/fileshare/wshfs/wshfs.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file.go
97-97: type waveFileRef
is unused
(unused)
102-102: func resolveWaveFile
is unused
(unused)
166-166: var fileCpCmd
is unused
(unused)
525-525: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
pkg/wshrpc/wshremote/wshremote.go
231-231: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
325-325: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
256-256: SA4009: argument err is overwritten before first use
(staticcheck)
258-258: SA4009(related information): assignment to err
(staticcheck)
cmd/wsh/cmd/wshcmd-file-util.go
219-219: ineffectual assignment to fixedPath
(ineffassign)
🔇 Additional comments (31)
pkg/wshrpc/wshrpctypes.go (1)
396-400
: Document IJsonBudget field purpose and constraints.The
IJsonBudget
field lacks documentation about its purpose, valid values, and default behavior.Please clarify:
- What is the purpose of IJsonBudget?
- What are the valid values?
- What happens when the budget is exceeded?
✅ Verification successful
Document IJsonBudget field purpose and constraints
The
IJsonBudget
field inFileOptsType
controls resource limits for incremental JSON operations. Requirements:
- Must be used with
IJson: true
- Must be a non-negative integer
- A value of 0 means no budget limit
The budget applies to all incremental JSON operations (set/delete/append) performed on the file.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for IJsonBudget usage to understand its constraints rg "IJsonBudget" -A 5Length of output: 1224
Script:
#!/bin/bash # Search for CompactIJson implementation and documentation ast-grep --pattern 'func CompactIJson($$$)' # Also search for any comments or documentation about IJson or budget rg -B 3 "CompactIJson|ijson budget" --type goLength of output: 2743
Script:
#!/bin/bash # Search for ApplyCommand implementation and any related documentation ast-grep --pattern 'func ApplyCommand($$$)' # Look for any documentation about ijson commands rg -B 5 "type Command" --type goLength of output: 9953
frontend/app/store/wshclientapi.ts (5)
151-153
: Consistent use ofFileData
across file operation commandsThe update to use
FileData
as the parameter type inFileAppendCommand
,FileCreateCommand
,FileDeleteCommand
,FileInfoCommand
,FileReadCommand
,FileWriteCommand
, andRemoteWriteFileCommand
methods enhances consistency and simplifies the API for file operations.Also applies to: 166-168, 171-173, 176-178, 196-198, 206-208, 316-318
185-188
: Verify the return types of streaming commandsThe methods
FileListStreamCommand
andRemoteListEntriesCommand
returnAsyncGenerator<CommandRemoteListEntriesRtnData, void, boolean>
. Ensure that these return types are correct and consistent with other streaming methods, and thatCommandRemoteListEntriesRtnData
is the intended return type.Also applies to: 228-231, 291-294, 352-355
190-194
: Addition ofFileMkdirCommand
enhances directory managementThe new
FileMkdirCommand
method adds functionality for creating directories, which extends the file operation capabilities of the API.
215-219
: Addition ofGetFullConfigCommand
improves configuration accessThe
GetFullConfigCommand
method allows clients to retrieve the full configuration, which is beneficial for comprehensive configuration management.
306-309
: Update return types toFileData
for consistencyThe return types of
RemoteStreamFileCommand
andRemoteStreamFileCommand
have been updated toFileData
, aligning them with other file streaming methods and promoting consistency.Also applies to: 368-371
pkg/wshrpc/wshclient/wshclient.go (6)
11-11
: Addition ofwconfig
package importThe import of
"github.com/wavetermdev/waveterm/pkg/wconfig"
indicates the use of configuration types within this file, which is appropriate given the addition ofGetFullConfigCommand
.
186-188
: Consistent use ofwshrpc.FileData
in file operationsThe update to use
wshrpc.FileData
as the parameter type inFileAppendCommand
,FileCreateCommand
,FileDeleteCommand
,FileInfoCommand
,FileListCommand
,FileReadCommand
,FileWriteCommand
, andRemoteWriteFileCommand
methods enhances consistency across file operations.Also applies to: 204-206, 210-212, 216-218, 222-224, 239-241, 250-252, 378-380
227-231
: Verify return types of streaming commandsThe methods
FileListStreamCommand
andRemoteListEntriesCommand
return channels ofwshrpc.RespOrErrorUnion[wshrpc.CommandRemoteListEntriesRtnData]
. Ensure that these return types are appropriate and consistent with other streaming methods.Also applies to: 352-355
233-237
: Addition ofFileMkdirCommand
enhances directory managementThe new
FileMkdirCommand
function provides the capability to create directories, enhancing the file management features of the client.
262-266
: Addition ofGetFullConfigCommand
improves configuration accessThe
GetFullConfigCommand
function allows retrieval of the full configuration, which is beneficial for clients needing comprehensive configuration data.
368-371
: Update ofRemoteStreamFileCommand
return type towshrpc.FileData
The return type of
RemoteStreamFileCommand
has been updated towshrpc.FileData
, aligning it with other methods and enhancing consistency.cmd/wsh/cmd/wshcmd-file-util.go (6)
Line range hint
31-48
: Rename of functions and parameter type updateThe function
ensureWaveFile
has been renamed toensureFile
, and the parameter type has been changed fromwshrpc.CommandFileData
towshrpc.FileData
. This update reflects the new data structure and enhances clarity in file operations.
Line range hint
51-90
: Rename and update ofstreamWriteToFile
functionThe function has been renamed from
streamWriteToWaveFile
tostreamWriteToFile
and updated to usewshrpc.FileData
. This improves consistency with the updated data types and function naming conventions.
Line range hint
93-118
: Rename and update ofstreamReadFromFile
functionSimilarly,
streamReadFromWaveFile
has been renamed tostreamReadFromFile
, with parameter updates to usewshrpc.FileData
, aligning the function with the new data structures.
130-130
: Update tofileListResult
structThe
info
field now uses*wshrpc.FileInfo
instead of*wshrpc.WaveFileInfo
, reflecting the updated file information structure and ensuring compatibility.
142-145
: Construction ofFileData
with updatedFileInfo
The updated
fileData
construction useswshrpc.FileData
with anInfo
field, aligning with the new data structures and improving data handling consistency.
172-177
: Update to usewshrpc.FileListData
and adjusted optionsThe changes update the file listing data structure to
wshrpc.FileListData
and adjust options withinstreamFileList
, ensuring compatibility with the new API and enhancing functionality.pkg/wshutil/wshrpc.go (2)
726-734
: LGTM! Good cleanup insetServerDone
.The function properly handles cleanup by:
- Setting the server status
- Closing the channel
- Draining any remaining messages
677-679
: LGTM! Good server status check.The
SendComplexRequest
function now properly checks if the server is still running before attempting to send new requests.pkg/wshrpc/wshremote/wshremote.go (1)
356-359
: LGTM! Good security check for path traversal.The code properly checks for directory traversal attempts using
..
in paths during tar extraction.pkg/wshrpc/wshserver/wshserver.go (1)
283-329
: LGTM! Clean implementation of file operations.The transition from
filestore
tofileshare
package is well-implemented with:
- Consistent error handling
- Clear function signatures
- Proper use of context
frontend/app/view/preview/preview.tsx (2)
142-142
: LGTM! Clean transition to the new FileData type.The refactoring from
FullFile
toFileData
type and the corresponding implementation usingRpcApi.FileReadCommand
looks good. The null checks and error handling are properly implemented.Also applies to: 387-397
371-376
: LGTM! Clean implementation of file info retrieval.The transition to using
RpcApi.FileInfoCommand
is well implemented with proper path handling throughformatRemoteUri
.cmd/wsh/cmd/wshcmd-file.go (6)
38-65
: Well-structured and comprehensive URI help text!The help text provides clear explanations and examples for each supported URI scheme, making it user-friendly.
91-94
: Verify if commenting out thecp
command is intentional.The
cp
command flags are commented out, which makes the command unusable. Based on the PR objectives to separate S3 and cross-remote copy features, is this intentional? If not, consider uncommenting these lines to maintain basic copy functionality.
Line range hint
177-199
: Clean and robust implementation!The function handles path resolution, error checking, and file streaming in a clear and logical manner.
🧰 Tools
🪛 golangci-lint (1.62.2)
97-97: type
waveFileRef
is unused(unused)
102-102: func
resolveWaveFile
is unused(unused)
166-166: var
fileCpCmd
is unused(unused)
221-234
: Well-structured output formatting!The use of tabwriter ensures clean, aligned output, and the metadata handling is thorough.
Line range hint
406-433
: Robust column printing implementation!The function handles terminal width detection gracefully with proper fallbacks and error handling.
Line range hint
435-486
: Clean and consistent long format printing!The implementation provides well-aligned output with proper handling of different entry types.
frontend/app/view/preview/directorypreview.tsx (1)
734-740
: Well-implemented file filtering!The implementation handles hidden files, case-insensitive search, and undefined data gracefully.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
cmd/wsh/cmd/wshcmd-file.go (1)
Line range hint
435-453
: Fix unused maxNameLen infilePrintLong
.The
maxNameLen
variable is declared but never used for calculating the name width.Apply this diff:
maxNameLen := 0 var samples []*wshrpc.FileInfo for respUnion := range filesChan { if respUnion.Error != nil { return respUnion.Error } resp := respUnion.Response samples = append(samples, resp.FileInfo...) + for _, f := range resp.FileInfo { + if len(f.Name) > maxNameLen { + maxNameLen = len(f.Name) + } + } }
♻️ Duplicate comments (2)
pkg/wshrpc/wshremote/wshremote.go (2)
231-232
:⚠️ Potential issue — Fix context leak in RemoteTarStreamCommand. The cancel function returned by context.WithTimeout is discarded, which can lead to a context leak. Apply this diff to properly handle context cancellation:

```diff
- readerCtx, _ := context.WithTimeout(context.Background(), timeout)
+ readerCtx, cancel := context.WithTimeout(context.Background(), timeout)
+ defer cancel()
  rtn := iochan.ReaderChan(readerCtx, pipeReader, wshrpc.FileChunkSize, func() {
```
325-326
:⚠️ Potential issue — Fix context leak in RemoteFileCopyCommand. The cancel function returned by context.WithTimeout is discarded, which can lead to a context leak. Apply this diff to properly handle context cancellation:

```diff
- readCtx, _ := context.WithTimeout(ctx, timeout)
+ readCtx, cancel := context.WithTimeout(ctx, timeout)
+ defer cancel()
  readCtx, cancel := context.WithCancelCause(readCtx)
```
🧹 Nitpick comments (6)
cmd/wsh/cmd/wshcmd-file.go (3)
36-36
: Simplify TimeoutYear using time.Duration. The current implementation is error-prone and hard to read. Use time.Duration for better clarity and type safety.
Apply this diff:

```diff
-TimeoutYear = 365 * 24 * 60 * 60 * 1000
+TimeoutYear = int(365 * 24 * time.Hour / time.Millisecond)
```
97-102
: Remove unused type and function.The type
waveFileRef
and functionresolveWaveFile
are no longer used and should be removed.Apply this diff:
-type waveFileRef struct { - zoneId string - fileName string -} - -func resolveWaveFile(ref *waveFileRef) (*waveobj.ORef, error) { - return resolveSimpleId(ref.zoneId) -}🧰 Tools
🪛 golangci-lint (1.62.2)
97-97: type
waveFileRef
is unused(unused)
102-102: func
resolveWaveFile
is unused(unused)
397-397
: Replace direct logging with proper debug logging.Using
log.Printf
directly in production code is not recommended. Consider using a proper logging framework or debug flags.pkg/wshrpc/wshremote/wshremote.go (2)
184-209
: Consider adding channel cleanup and documentation.A few suggestions to improve the implementation:
- Add documentation explaining the choice of channel buffer size (16).
- Consider adding a cleanup mechanism using
context.Done()
to prevent resource leaks if the client disconnects early.func (impl *ServerImpl) RemoteStreamFileCommand(ctx context.Context, data wshrpc.CommandRemoteStreamFileData) chan wshrpc.RespOrErrorUnion[wshrpc.FileData] { + // Buffer size of 16 chosen to balance memory usage with streaming performance ch := make(chan wshrpc.RespOrErrorUnion[wshrpc.FileData], 16) go func() { defer close(ch) + // Create a done channel to handle early client disconnection + done := make(chan struct{}) + go func() { + <-ctx.Done() + close(done) + }() err := impl.remoteStreamFileInternal(ctx, data, func(fileInfo []*wshrpc.FileInfo, data []byte, byteRange ByteRangeType) { - resp := wshrpc.FileData{} + select { + case <-done: + return + default: + resp := wshrpc.FileData{}
449-467
: Optimize directory traversal and error handling.The WalkDir implementation could be improved in terms of error handling and performance:
- Add proper error handling for the deferred increment
- Consider using a more efficient approach for large directories
Apply this diff to improve the implementation:
+ var walkErr error fs.WalkDir(os.DirFS(path), ".", func(path string, d fs.DirEntry, err error) error { - defer func() { + defer func() { + if r := recover(); r != nil { + walkErr = fmt.Errorf("panic in WalkDir: %v", r) + } seen++ }() if seen < data.Opts.Offset { return nil } if seen >= data.Opts.Offset+data.Opts.Limit { return io.EOF } if err != nil { + walkErr = err return err }pkg/wshrpc/wshrpctypes.go (1)
512-516
: Add validation for FileCopyOpts timeout.The
FileCopyOpts
struct includes a timeout field but lacks documentation about its units and validation for negative values.Apply this diff to improve the struct:
type FileCopyOpts struct { Overwrite bool `json:"overwrite,omitempty"` Recursive bool `json:"recursive,omitempty"` Merge bool `json:"merge,omitempty"` - Timeout int `json:"timeout,omitempty"` + Timeout int `json:"timeout,omitempty"` // timeout in milliseconds, must be positive }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
cmd/wsh/cmd/wshcmd-file-util.go
(9 hunks)cmd/wsh/cmd/wshcmd-file.go
(10 hunks)pkg/filestore/blockstore.go
(7 hunks)pkg/filestore/blockstore_test.go
(14 hunks)pkg/remote/fileshare/wavefs/wavefs.go
(1 hunks)pkg/remote/fileshare/wshfs/wshfs.go
(1 hunks)pkg/wshrpc/wshremote/wshremote.go
(12 hunks)pkg/wshrpc/wshrpctypes.go
(9 hunks)pkg/wshutil/wshrpc.go
(9 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- pkg/wshutil/wshrpc.go
- pkg/remote/fileshare/wshfs/wshfs.go
- pkg/remote/fileshare/wavefs/wavefs.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file.go
97-97: type waveFileRef
is unused
(unused)
102-102: func resolveWaveFile
is unused
(unused)
166-166: var fileCpCmd
is unused
(unused)
525-525: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
cmd/wsh/cmd/wshcmd-file-util.go
218-218: ineffectual assignment to fixedPath
(ineffassign)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Analyze (javascript-typescript)
- GitHub Check: Analyze (go)
- GitHub Check: Build for TestDriver.ai
🔇 Additional comments (16)
pkg/filestore/blockstore.go (7)
15-15
: LGTM!The added imports are correctly used in the code.
Also applies to: 22-22
56-59
: LGTM!The field types have been correctly updated to use the
wshrpc
package types.Also applies to: 62-64
86-87
: LGTM!The function signature has been correctly updated to use
wshrpc.FileMeta
.
115-115
: LGTM!The function signature has been correctly updated to use
wshrpc
types.
206-206
: LGTM!The function signature has been correctly updated to use
wshrpc.FileMeta
.
285-285
: LGTM!The map initialization has been correctly updated to use
wshrpc.FileMeta
.
371-373
: LGTM!Added proper size validation to prevent integer overflow issues.
pkg/filestore/blockstore_test.go (1)
21-21
: LGTM!The test file has been correctly updated to use
wshrpc
types, maintaining the same test coverage and functionality.Also applies to: 86-86, 160-160, 174-174, 178-178, 223-223, 327-327, 355-355, 376-376, 398-398, 487-487, 541-541, 562-562, 594-594, 682-682
cmd/wsh/cmd/wshcmd-file-util.go (4)
Line range hint
30-46
: LGTM! Clean refactoring of the ensure function.The function has been successfully refactored with proper error handling and clear error messages.
Line range hint
50-90
: LGTM! Well-structured streaming implementation.The function implements efficient streaming with proper chunking (32KB), size limits, and error handling.
Line range hint
92-125
: LGTM! Efficient streaming read implementation.The function implements proper chunking and error handling for streaming reads.
212-232
:⚠️ Potential issueFix ineffectual assignment in
fixRelativePaths
.The assignment to
fixedPath
at line 218 is never used.Apply this diff to fix the issue:
func fixRelativePaths(path string) (string, error) { conn, err := connparse.ParseURI(path) if err != nil { return "", err } if conn.Scheme == connparse.ConnectionTypeWsh { - fixedPath := conn.Path if conn.Host == connparse.ConnHostCurrent { conn.Host = RpcContext.Conn fixedPath, err := fileutil.FixPath(conn.Path) if err != nil { return "", err } conn.Path = fixedPath } if conn.Host == "" { conn.Host = wshrpc.LocalConnName } } return conn.GetFullURI(), nil }
Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
218-218: ineffectual assignment to fixedPath
(ineffassign)
cmd/wsh/cmd/wshcmd-file.go (2)
38-65
: LGTM! Comprehensive URI help text.The help text is well-structured, providing clear examples and explanations for each supported URI scheme.
91-94
: Uncomment the cp command to register it with the CLI. The fileCpCmd command and its flags are currently commented out, preventing users from using the cp command.
23-24
: Update MaxFileSize comment to match the actual value.The comment should reflect that the maximum file size is 50MB, not 10MB.
Apply this diff to fix the inconsistency:
- MaxFileSize = 50 * 1024 * 1024 // 50M + MaxFileSize = 50 * 1024 * 1024 // 50MB
366-369
: Add validation for the FileDataAt struct. The FileDataAt struct lacks validation for its fields. Consider adding validation to ensure Offset is non-negative and Size is positive when used. Run this script to check for potential validation issues:
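Separately from that verification script (not shown here), a minimal sketch of the kind of validation being suggested. The struct below only mirrors the two fields named in the comment; the actual field set and types in wshrpctypes.go may differ, so treat the whole thing as an assumption rather than the project's definition.

```go
package main

import "fmt"

// FileDataAt mirrors the fields referenced in the review; the exact definition
// in wshrpctypes.go may differ.
type FileDataAt struct {
	Offset int64 `json:"offset"`
	Size   int64 `json:"size,omitempty"`
}

// Validate enforces the constraints suggested above: a non-negative offset and
// a non-negative size. A stricter variant could require Size > 0 whenever a
// bounded read is actually requested.
func (at *FileDataAt) Validate() error {
	if at.Offset < 0 {
		return fmt.Errorf("offset must be non-negative, got %d", at.Offset)
	}
	if at.Size < 0 {
		return fmt.Errorf("size must not be negative, got %d", at.Size)
	}
	return nil
}

func main() {
	fmt.Println((&FileDataAt{Offset: -1, Size: 10}).Validate())
	fmt.Println((&FileDataAt{Offset: 0, Size: 1024}).Validate())
}
```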
Actionable comments posted: 2
🧹 Nitpick comments (6)
pkg/wshutil/wshrpc.go (3)
333-359
: Consider adding error logging for timeout cases.The server loop's error handling for timeouts could benefit from more detailed logging, especially in production environments.
case resIdTimeout = <-w.CtxDoneCh: if w.Debug { log.Printf("[%s] received request timeout: %s\n", w.DebugName, resIdTimeout) } + log.Printf("Request timeout occurred for resId: %s\n", resIdTimeout) w.unregisterRpc(resIdTimeout, fmt.Errorf("EC-TIME: timeout waiting for response")) continue
736-756
: Consider adding a maximum retry limit.The
retrySendTimeout
method could potentially loop indefinitely. Consider adding a maximum retry limit to prevent excessive resource consumption.func (w *WshRpc) retrySendTimeout(resId string) { + maxRetries := 50 // 5 seconds total with 100ms sleep + retries := 0 for { + if retries >= maxRetries { + log.Printf("Maximum retries reached for timeout notification: %s\n", resId) + return + } done := func() bool { w.Lock.Lock() defer w.Lock.Unlock() if w.ServerDone { return true } select { case w.CtxDoneCh <- resId: return true default: return false } }() if done { return } + retries++ time.Sleep(100 * time.Millisecond) } }
726-734
: Consider adding metrics for graceful shutdown.The
setServerDone
method handles shutdown well, but adding metrics about remaining requests during shutdown could be valuable for monitoring.func (w *WshRpc) setServerDone() { w.Lock.Lock() defer w.Lock.Unlock() w.ServerDone = true + remainingRequests := len(w.RpcMap) + if remainingRequests > 0 { + log.Printf("Server shutting down with %d pending requests\n", remainingRequests) + } close(w.CtxDoneCh) for range w.CtxDoneCh { // drain channel } }pkg/wshrpc/wshrpctypes.go (2)
148-159
: Add godoc comments to interface methods.While the methods are well-designed with consistent parameter and return types, they would benefit from godoc comments explaining their purpose, parameters, and return values.
Example documentation format:
+// FileMkdirCommand creates a new directory at the specified path. +// It returns an error if the directory already exists or cannot be created. FileMkdirCommand(ctx context.Context, data FileData) error +// RemoteStreamFileCommand streams file data from a remote location. +// It returns a channel that receives file data chunks or errors. RemoteStreamFileCommand(ctx context.Context, data CommandRemoteStreamFileData) chan RespOrErrorUnion[FileData]Also applies to: 197-205
512-516
: Document the unit of FileCopyOpts.Timeout.The
Timeout
field lacks documentation about its unit (seconds/milliseconds). Please add a comment to clarify the expected unit.type FileCopyOpts struct { Overwrite bool `json:"overwrite,omitempty"` Recursive bool `json:"recursive,omitempty"` Merge bool `json:"merge,omitempty"` - Timeout int64 `json:"timeout,omitempty"` + Timeout int64 `json:"timeout,omitempty"` // timeout in milliseconds }cmd/wsh/cmd/wshcmd-file.go (1)
36-36
: Improve timeout definition using time.Duration.The
TimeoutYear
constant could be more readable and maintainable usingtime.Duration
.Apply this diff to improve the timeout definition:
- TimeoutYear = int64(365) * 24 * 60 * 60 * 1000 + TimeoutYear = int64(365 * 24 * time.Hour / time.Millisecond)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
cmd/wsh/cmd/wshcmd-file-util.go
(10 hunks)cmd/wsh/cmd/wshcmd-file.go
(12 hunks)pkg/wshrpc/wshrpctypes.go
(10 hunks)pkg/wshutil/wshrpc.go
(10 hunks)
🧰 Additional context used
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file-util.go
218-218: ineffectual assignment to fixedPath
(ineffassign)
cmd/wsh/cmd/wshcmd-file.go
97-97: type waveFileRef
is unused
(unused)
102-102: func resolveWaveFile
is unused
(unused)
166-166: var fileCpCmd
is unused
(unused)
525-525: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
🔇 Additional comments (13)
pkg/wshutil/wshrpc.go (4)
27-27
: LGTM! Well-structured additions for improved timeout handling.The new
CtxDoneChSize
constant and the correspondingCtxDoneCh
channel, along with theServerDone
flag, provide a robust mechanism for handling request timeouts and server state.Also applies to: 48-48, 57-57
Line range hint
212-221
: LGTM! Proper initialization of new components.The initialization in
MakeWshRpc
correctly sets up the new timeout channel and maintains thread safety with proper synchronization.
442-448
: LGTM! Improved error handling with non-blocking send. The non-blocking send pattern in unregisterRpc is a good improvement to prevent deadlocks in the main loop.
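For context, the pattern being praised is the standard select-with-default send. The sketch below is purely illustrative (generic helper name and channel contents are assumptions, not the wshrpc code): if the receiver is not ready or the buffer is full, the sender gives up instead of blocking.

```go
package main

import "fmt"

// nonBlockingSend returns false instead of blocking when the channel cannot
// accept the value right now; the caller can log, drop, or retry later.
func nonBlockingSend[T any](ch chan T, v T) bool {
	select {
	case ch <- v:
		return true
	default:
		return false // would have blocked
	}
}

func main() {
	ch := make(chan string, 1)
	fmt.Println(nonBlockingSend(ch, "timeout-res-1")) // true, fits in the buffer
	fmt.Println(nonBlockingSend(ch, "timeout-res-2")) // false, buffer full
}
```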
677-679
: LGTM! Proper server state check.The early check for server state in
SendComplexRequest
prevents unnecessary processing of requests when the server is shutting down.pkg/wshrpc/wshrpctypes.go (4)
22-31
: LGTM! Well-defined constants with appropriate values.The file size limits and chunk sizes are reasonable for their intended purposes. The constants are properly documented.
Line range hint
59-80
: LGTM! Well-organized command constants.The new commands follow a consistent naming pattern and provide a comprehensive set of operations for file handling and configuration management.
606-610
: LGTM! Well-structured BlockInfoData type.The type includes all necessary fields for block information and maintains consistency with the rest of the codebase.
378-393
: Consider portability of os.FileMode across different systems.While
FileInfo.ModeStr
provides a portable string representation,FileInfo.Mode
directly exposesos.FileMode
which might behave differently across systems. Consider if this field should be exposed in the RPC interface.Run this script to check how os.FileMode is handled across different systems:
cmd/wsh/cmd/wshcmd-file-util.go (4)
Line range hint
50-89
: LGTM! Well-structured file write implementation. The implementation correctly handles:
- Initial file truncation
- Chunked reading and writing
- Size limit validation
- Base64 encoding for data transmission
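To make the chunked-plus-base64 pattern listed above concrete, here is a loose, self-contained sketch. It is not the wsh implementation: the function name, the send callback, and the chunk/limit values are assumptions; only the overall shape (read a chunk, enforce the size limit, base64-encode, hand off) matches the description.

```go
package main

import (
	"bytes"
	"encoding/base64"
	"fmt"
	"io"
)

// streamBase64Chunks reads r in fixed-size chunks and hands each chunk to send
// as base64 text, stopping once maxBytes have been consumed.
func streamBase64Chunks(r io.Reader, chunkSize int, maxBytes int64, send func(data64 string) error) error {
	buf := make([]byte, chunkSize)
	var total int64
	for {
		n, err := r.Read(buf)
		if n > 0 {
			total += int64(n)
			if total > maxBytes {
				return fmt.Errorf("input exceeds size limit of %d bytes", maxBytes)
			}
			if sendErr := send(base64.StdEncoding.EncodeToString(buf[:n])); sendErr != nil {
				return sendErr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	src := bytes.NewReader([]byte("hello, wave terminal"))
	_ = streamBase64Chunks(src, 8, 1<<20, func(data64 string) error {
		fmt.Println(data64)
		return nil
	})
}
```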
Line range hint
92-124
: LGTM! Well-structured file read implementation. The implementation correctly handles:
- Chunked reading with proper offset management
- Base64 decoding
- Error handling for both read and write operations
Line range hint
134-210
: LGTM! Comprehensive file listing implementation. The implementation correctly handles:
- Single file lookup vs directory listing
- Pagination for large directories
- Error propagation
- Path prefix management
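To illustrate the pagination point above in isolation, a tiny offset/limit helper follows. It is a generic sketch, not the wshcmd-file-util code: the function name and the rule that a non-positive limit means "no limit" are assumptions.

```go
package main

import "fmt"

// paginate returns the window of entries selected by offset and limit.
func paginate[T any](entries []T, offset, limit int) []T {
	if offset < 0 {
		offset = 0
	}
	if offset >= len(entries) {
		return nil
	}
	end := len(entries)
	if limit > 0 && offset+limit < end {
		end = offset + limit
	}
	return entries[offset:end]
}

func main() {
	names := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
	fmt.Println(paginate(names, 0, 2)) // [a.txt b.txt]
	fmt.Println(paginate(names, 4, 2)) // [e.txt]
	fmt.Println(paginate(names, 9, 2)) // []
}
```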
212-232
:⚠️ Potential issueFix ineffectual assignment to
fixedPath
.The variable
fixedPath
is assigned at line 218 but not used until line 221, and the assignment at line 218 is overwritten at line 221.Apply this diff to fix the issue:
func fixRelativePaths(path string) (string, error) { conn, err := connparse.ParseURI(path) if err != nil { return "", err } if conn.Scheme == connparse.ConnectionTypeWsh { - fixedPath := conn.Path if conn.Host == connparse.ConnHostCurrent { conn.Host = RpcContext.Conn fixedPath, err := fileutil.FixPath(conn.Path) if err != nil { return "", err } conn.Path = fixedPath } if conn.Host == "" { conn.Host = wshrpc.LocalConnName } } return conn.GetFullURI(), nil }
Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
218-218: ineffectual assignment to fixedPath
(ineffassign)
cmd/wsh/cmd/wshcmd-file.go (1)
517-525
:⚠️ Potential issueFix premature loop termination in one-per-line output mode.
The return statement inside the loop causes it to exit after processing only the first batch of files.
Apply this diff to fix the issue:
if onePerLine { for respUnion := range filesChan { if respUnion.Error != nil { log.Printf("error: %v", respUnion.Error) return respUnion.Error } for _, f := range respUnion.Response.FileInfo { fmt.Fprintln(os.Stdout, f.Name) } - return nil } + return nil }Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
525-525: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
Actionable comments posted: 2
♻️ Duplicate comments (1)
cmd/wsh/cmd/wshcmd-file.go (1)
521-529
:⚠️ Potential issueFix Premature Loop Termination in
fileListRun
The
return nil
statement inside the loop causes the function to exit after processing only the first batch of files. This prevents listing all files when using the--one
flag.Apply this diff to remove the premature return:
if onePerLine { for respUnion := range filesChan { if respUnion.Error != nil { log.Printf("error: %v", respUnion.Error) return respUnion.Error } for _, f := range respUnion.Response.FileInfo { fmt.Fprintln(os.Stdout, f.Name) } - return nil } + return nil }🧰 Tools
🪛 golangci-lint (1.62.2)
529-529: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
🧹 Nitpick comments (1)
cmd/wsh/cmd/wshcmd-file.go (1)
167-170
: Remove UnusedfileCpCmd
andfileCpRun
FunctionThe
fileCpCmd
variable is declared but commented out, and the correspondingfileCpRun
function is present but not used. This results in unused code in the codebase.Consider removing or commenting out the
fileCpRun
function to clean up unused code until thecp
command is ready to be enabled.- var fileCpCmd = &cobra.Command{ - Use: "cp [source-uri] [destination-uri]" + UriHelpText, - Short: "copy files between storage systems", - Long: "Copy files between different storage systems." + UriHelpText, - Example: " wsh file cp wavefile://block/config.txt ./local-config.txt\n wsh file cp ./local-config.txt wavefile://block/config.txt\n wsh file cp wsh://user@ec2/home/user/config.txt wavefile://client/config.txt", - Args: cobra.ExactArgs(2), - RunE: activityWrap("file", fileCpRun), - PreRunE: preRunSetupRpcClient, - } - - func fileCpRun(cmd *cobra.Command, args []string) error { - src, dst := args[0], args[1] - recursive, err := cmd.Flags().GetBool("recursive") - if err != nil { - return err - } - merge, err := cmd.Flags().GetBool("merge") - if err != nil { - return err - } - force, err := cmd.Flags().GetBool("force") - if err != nil { - return err - } - - srcPath, err := fixRelativePaths(src) - if err != nil { - return fmt.Errorf("unable to parse src path: %w", err) - } - destPath, err := fixRelativePaths(dst) - if err != nil { - return fmt.Errorf("unable to parse dest path: %w", err) - } - log.Printf("Copying %s to %s; recursive: %v, merge: %v, force: %v", srcPath, destPath, recursive, merge, force) - rpcOpts := &wshrpc.RpcOpts{Timeout: TimeoutYear} - err = wshclient.FileCopyCommand(RpcClient, wshrpc.CommandFileCopyData{SrcUri: srcPath, DestUri: destPath, Opts: &wshrpc.FileCopyOpts{Recursive: recursive, Merge: merge, Overwrite: force, Timeout: TimeoutYear}}, rpcOpts) - if err != nil { - return fmt.Errorf("copying file: %w", err) - } - return nil - }Also applies to: 375-404
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
cmd/wsh/cmd/wshcmd-file.go
(12 hunks)pkg/remote/fileshare/wshfs/wshfs.go
(1 hunks)
🧰 Additional context used
📓 Learnings (1)
pkg/remote/fileshare/wshfs/wshfs.go (1)
Learnt from: esimkowitz
PR: wavetermdev/waveterm#1790
File: pkg/remote/fileshare/wshfs/wshfs.go:122-122
Timestamp: 2025-01-22T01:28:41.417Z
Learning: The RpcClient in pkg/remote/fileshare/wshfs/wshfs.go is initialized and handled downstream by either main-server or wshcmd-connserver, as documented in the package comment.
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file.go
97-97: type waveFileRef
is unused
(unused)
102-102: func resolveWaveFile
is unused
(unused)
166-166: var fileCpCmd
is unused
(unused)
529-529: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
Actionable comments posted: 0
🔭 Outside diff range comments (1)
frontend/app/view/preview/preview.tsx (1)
Line range hint 602-611: Improve error handling in handleFileSave.
The current error handling only logs to console. Consider:
- Showing an error notification to the user
- Rolling back the UI state on failure
      await RpcApi.FileWriteCommand(TabRpcClient, {
          info: {
              path: await this.formatRemoteUri(filePath),
          },
          data64: stringToBase64(newFileContent),
      });
      globalStore.set(this.fileContent, newFileContent);
      globalStore.set(this.newFileContent, null);
-     console.log("saved file", filePath);
  } catch (error) {
+     // Roll back UI state
+     globalStore.set(this.newFileContent, newFileContent);
      console.error("Error saving file:", error);
+     // Show error to user
+     globalStore.set(this.connectionError, `Failed to save file: ${error.message}`);
  }
🧹 Nitpick comments (1)
frontend/app/view/preview/preview.tsx (1)
376-376: Remove debug console.log statements.
These debug statements should be removed before merging to production.
- console.log("stat file", statFile);
- console.log("full file", file);
- console.log("parent file info", parentFileInfo);
Also applies to: 397-397, 525-525
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
frontend/app/view/preview/directorypreview.tsx (7 hunks)
frontend/app/view/preview/preview.tsx (7 hunks)
pkg/wshrpc/wshremote/wshremote.go (12 hunks)
🧰 Additional context used
🪛 golangci-lint (1.62.2)
pkg/wshrpc/wshremote/wshremote.go
237-237: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
331-331: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
262-262: SA4009: argument err is overwritten before first use
(staticcheck)
264-264: SA4009(related information): assignment to err
(staticcheck)
🔇 Additional comments (15)
frontend/app/view/preview/directorypreview.tsx (7)
8-9: LGTM! Import changes align with RpcApi transition.
The new imports support the transition from FileService to RpcApi for file operations.
Also applies to: 12-12
71-71: LGTM! Improved error handling in getBestUnit.
The added undefined check prevents potential runtime errors.
622-626: Add error handling and user feedback for file deletion.
The current implementation only logs errors to console. Consider showing error messages to users and handling specific error cases.
719-728: Add error handling for file read operations.
The FileReadCommand lacks error handling for potential network or permission issues.
818-826: Add error handling for file creation.
The FileCreateCommand lacks error handling. Consider handling specific error cases and providing user feedback.
839-843: Add error handling for directory creation.
The FileMkdirCommand lacks error handling. Consider handling specific error cases and providing user feedback.
734-740: LGTM! Improved null safety in filtering logic.
The added null check and fallback to empty array prevent potential runtime errors.
frontend/app/view/preview/preview.tsx (1)
780-783: Ensure proper URI encoding in formatRemoteUri.
The path parameter should be URL-encoded to handle special characters correctly.
pkg/wshrpc/wshremote/wshremote.go (7)
Line range hint 69-215: LGTM! Well-structured file streaming implementation.
The changes to include byte range support in the streaming functions are well-implemented with proper error handling and validation.
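For context, the usual way to serve a byte range from a file in Go is to wrap it in an io.SectionReader; the sketch below is illustrative only and is not the code under review (the helper name and the fixed range in main are made up):
package main

import (
	"fmt"
	"io"
	"os"
)

// readRange returns up to length bytes starting at offset, which is the shape
// of request a byte-range streaming endpoint typically accepts. Sketch only.
func readRange(path string, offset, length int64) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	// io.NewSectionReader limits reads to [offset, offset+length).
	section := io.NewSectionReader(f, offset, length)
	return io.ReadAll(section)
}

func main() {
	data, err := readRange("/etc/hosts", 0, 64)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("first %d bytes: %q\n", len(data), data)
}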
🧰 Tools
🪛 golangci-lint (1.62.2)
237-237: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
331-331: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
262-262: SA4009: argument err is overwritten before first use
(staticcheck)
264-264: SA4009(related information): assignment to err
(staticcheck)
440-505: LGTM! Well-implemented directory listing with pagination.
The implementation properly handles pagination with offset and limit parameters, and includes appropriate error handling.
Line range hint 658-699: LGTM! Well-implemented file write with offset support.
The implementation properly handles the following (a brief sketch of the general pattern appears after this list):
- File modes and permissions
- Base64 decoding of data
- Writing at specific offsets
- Error handling
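As promised above, here is a minimal, self-contained sketch of that pattern (decode a base64 payload, fall back to a default mode, write at an offset). It is an illustration only, not the PR's implementation; the helper name and the 0644 default are assumptions:
package main

import (
	"encoding/base64"
	"fmt"
	"io"
	"io/fs"
	"os"
)

// writeBase64At decodes a base64 payload and writes it into path starting at
// offset, creating the file with mode if it does not exist yet.
// Hypothetical helper for illustration; not part of the PR.
func writeBase64At(path, data64 string, offset int64, mode fs.FileMode) error {
	if mode == 0 {
		mode = 0o644 // assumed default permission
	}
	buf, err := base64.StdEncoding.DecodeString(data64)
	if err != nil {
		return fmt.Errorf("cannot decode data64: %w", err)
	}
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, mode)
	if err != nil {
		return fmt.Errorf("cannot open file %q: %w", path, err)
	}
	defer f.Close()
	// io.NewOffsetWriter turns sequential writes into WriteAt calls at offset.
	if _, err := io.NewOffsetWriter(f, offset).Write(buf); err != nil {
		return fmt.Errorf("cannot write to file %q: %w", path, err)
	}
	return nil
}

func main() {
	payload := base64.StdEncoding.EncodeToString([]byte("hello"))
	if err := writeBase64At("/tmp/offset-demo.txt", payload, 4, 0); err != nil {
		fmt.Println("error:", err)
	}
}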
362-365: 🛠️ Refactor suggestion: Strengthen path traversal protection.
The current path traversal check only looks for ".." but might miss other variants.
Apply this diff to improve path traversal protection:
- if strings.Contains(next.Name, "..") {
+ cleanPath := filepath.Clean(next.Name)
+ if strings.Contains(cleanPath, "..") || strings.HasPrefix(cleanPath, "/") || strings.HasPrefix(cleanPath, "\\") {
      log.Printf("skipping file with unsafe path: %q\n", next.Name)
      continue
  }
Likely invalid or redundant comment.
331-332: ⚠️ Potential issue: Fix context leak in RemoteFileCopyCommand.
The cancel function from context.WithTimeout is discarded.
Apply this diff to fix the context leak:
- readCtx, _ := context.WithTimeout(ctx, timeout)
+ readCtx, cancel := context.WithTimeout(ctx, timeout)
+ defer cancel()
  readCtx, cancel := context.WithCancelCause(readCtx)
Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
331-331: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
237-238: ⚠️ Potential issue: Fix context leak in RemoteTarStreamCommand.
The cancel function returned by context.WithTimeout is discarded, which can lead to a context leak.
Apply this diff to properly handle context cancellation:
- readerCtx, _ := context.WithTimeout(context.Background(), timeout)
+ readerCtx, cancel := context.WithTimeout(context.Background(), timeout)
+ defer cancel()
  rtn := iochan.ReaderChan(readerCtx, pipeReader, wshrpc.FileChunkSize, func() {
Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
237-237: lostcancel: the cancel function returned by context.WithTimeout should be called, not discarded, to avoid a context leak
(govet)
253-298: 🛠️ Refactor suggestion: Improve error handling in tar stream creation.
The tar stream creation lacks proper error handling for several operations:
- No error handling for tarWriter.Close()
- No cleanup of resources in case of errors
Apply this diff to improve error handling:
+ var walkErr error
  defer func() {
-     tarWriter.Close()
+     if closeErr := tarWriter.Close(); closeErr != nil && walkErr == nil {
+         walkErr = fmt.Errorf("error closing tar writer: %w", closeErr)
+     }
+     if walkErr != nil {
+         rtn <- wshutil.RespErr[[]byte](walkErr)
+     }
  }()
- err := filepath.Walk(path, func(file string, fi os.FileInfo, err error) error {
+ walkErr = filepath.Walk(path, func(file string, fi os.FileInfo, err error) error {
Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
262-262: SA4009: argument err is overwritten before first use
(staticcheck)
264-264: SA4009(related information): assignment to err
(staticcheck)
Actionable comments posted: 6
🔭 Outside diff range comments (1)
pkg/wshrpc/wshremote/wshremote.go (1)
Line range hint 712-762: Add file size validation in RemoteWriteFileCommand.
The function should validate the total file size against a maximum limit to prevent disk space issues. Consider checking against wshrpc.MaxFileSize similar to other file operations in this package.
Apply this diff to add size validation:
  dataSize := base64.StdEncoding.DecodedLen(len(data.Data64))
+ if offset+int64(dataSize) > wshrpc.MaxFileSize {
+     return fmt.Errorf("resulting file size %d exceeds maximum allowed size %d", offset+int64(dataSize), wshrpc.MaxFileSize)
+ }
  dataBytes := make([]byte, dataSize)
♻️ Duplicate comments (4)
pkg/remote/fileshare/s3fs/s3fs.go (1)
71-71: ⚠️ Potential issue: Potential nil pointer dereference in debug log
In the debug log statement log.Printf("bucket: %v", *bucket.Name), there is a potential nil pointer dereference if bucket.Name is nil. Although you've added a nil check before appending to entries, the debug log should also handle the case where bucket.Name might be nil to prevent a panic.
Apply this diff to safely handle nil bucket names in the debug log:
- log.Printf("bucket: %v", *bucket.Name)
+ if bucket.Name != nil {
+     log.Printf("bucket: %v", *bucket.Name)
+ } else {
+     log.Println("bucket has nil name")
+ }
cmd/wsh/cmd/wshcmd-file.go (1)
551-559: ⚠️ Potential issue: Fix premature loop termination in fileListRun.
The return statement inside the loop causes it to exit after processing only the first batch of files.
Apply this diff:
  if onePerLine {
      for respUnion := range filesChan {
          if respUnion.Error != nil {
              log.Printf("error: %v", respUnion.Error)
              return respUnion.Error
          }
          for _, f := range respUnion.Response.FileInfo {
              fmt.Fprintln(os.Stdout, f.Name)
          }
-         return nil
      }
+     return nil
  }
🧰 Tools
🪛 golangci-lint (1.62.2)
559-559: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
pkg/wshrpc/wshremote/wshremote.go (2)
237-238: ⚠️ Potential issue: Fix context leak in RemoteTarStreamCommand.
The cancel function returned by context.WithTimeout is discarded, which can lead to a context leak.
Apply this diff to properly handle context cancellation:
- readerCtx, _ := context.WithTimeout(context.Background(), timeout)
+ readerCtx, cancel := context.WithTimeout(context.Background(), timeout)
+ defer cancel()
  rtn := iochan.ReaderChan(readerCtx, pipeReader, wshrpc.FileChunkSize, func() {
382-385: ⚠️ Potential issue: Strengthen path traversal protection.
The current path traversal check only looks for ".." but might miss other variants. Consider using a more robust path sanitization approach.
Apply this diff to improve path traversal protection:
- if strings.Contains(next.Name, "..") {
+ cleanPath := filepath.Clean(next.Name)
+ if strings.Contains(cleanPath, "..") || strings.HasPrefix(cleanPath, "/") || strings.HasPrefix(cleanPath, "\\") {
      log.Printf("skipping file with unsafe path: %q\n", next.Name)
      continue
  }
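Beyond the string checks in the suggested diff, a containment check based on filepath.Rel is a common way to reject any entry that would resolve outside the extraction root, however the traversal is spelled. The sketch below is illustrative only; the helper name and error text are made up, and absolute entry names are nested under the root rather than rejected:
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins name onto root and errors out if the result would escape root.
func safeJoin(root, name string) (string, error) {
	dest := filepath.Join(root, name)
	rel, err := filepath.Rel(root, dest)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", fmt.Errorf("unsafe path %q escapes %q", name, root)
	}
	return dest, nil
}

func main() {
	for _, name := range []string{"docs/readme.md", "../etc/passwd", "a/../../x"} {
		p, err := safeJoin("/tmp/extract", name)
		fmt.Println(name, "->", p, err)
	}
}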
🧹 Nitpick comments (13)
pkg/remote/fileshare/wavefs/wavefs.go (3)
146-146
: Fix map value type declaration.The map value is declared as
any
but used asstruct{}
. For better type safety and memory efficiency, declare it asmap[string]struct{}
.-dirMap := make(map[string]any) // the value is max modtime +dirMap := make(map[string]struct{}) // using empty struct as value
268-293
: Remove commented-out code.This block of commented-out code should be removed as it's no longer needed and adds unnecessary noise to the codebase.
206-345
: Reduce code duplication in file writing operations.The
PutFile
andAppendFile
methods share significant amounts of code, particularly in file creation and error handling. Consider extracting common functionality into helper methods.Example helper method:
func (c WaveClient) prepareFile(ctx context.Context, conn *connparse.Connection, data wshrpc.FileData) (string, string, []byte, error) { dataBuf, err := base64.StdEncoding.DecodeString(data.Data64) if err != nil { return "", "", nil, fmt.Errorf("error decoding data64: %w", err) } zoneId := conn.Host if zoneId == "" { return "", "", nil, fmt.Errorf("zoneid not found in connection") } fileName, err := cleanPath(conn.Path) if err != nil { return "", "", nil, fmt.Errorf("error cleaning path: %w", err) } return zoneId, fileName, dataBuf, nil }pkg/remote/connparse/connparse.go (1)
30-54
: Add GoDoc comments to exported functions and methodsThe exported functions and methods in this package lack GoDoc comments. Adding comments that start with the function or method name enhances readability and helps with generating documentation.
For example:
// GetSchemeParts splits the scheme into its components. func (c *Connection) GetSchemeParts() []string { return strings.Split(c.Scheme, ":") } // ParseURI parses a connection URI and returns a Connection struct. func ParseURI(uri string) (*Connection, error) { // function implementation }Also applies to: 56-82, 84-143
pkg/remote/fileshare/fstype/fstype.go (2)
38-39
: Enhance documentation for theJoin
method.The current documentation "joins the given parts to the connection path" could be more specific about path safety and normalization.
Consider adding:
// Join safely combines the given parts with the connection path, ensuring proper // path normalization and preventing directory traversal attacks. Returns the // normalized path and any error that occurred during path validation.
33-35
: Clarify behavior for directory operations inMove
andCopy
.The documentation should specify how these methods handle directories (recursively or not) and their behavior with existing files/directories.
Consider adding:
// Move moves the file or directory from srcConn to destConn. For directories, // the operation is recursive. If the destination exists, the operation fails // unless opts.Overwrite is true. For directories, opts.Merge controls whether // to merge with existing directories. // Copy copies the file or directory from srcConn to destConn. For directories, // the operation is recursive. If the destination exists, the operation fails // unless opts.Overwrite is true. For directories, opts.Merge controls whether // to merge with existing directories.pkg/wshrpc/wshrpctypes.go (1)
382-396
: Clarify the comment about path separators in FileInfo.Dir.The comment states that separators will be normalized to "/", but it's not clear if this applies to Windows paths. Consider updating the comment to explicitly mention Windows path handling.
- Dir string `json:"dir,omitempty"` // returns the directory part of the path (if this is a a directory, it will be equal to Path). "~" will be expanded, and separators will be normalized to "/" + Dir string `json:"dir,omitempty"` // returns the directory part of the path (if this is a directory, it will be equal to Path). "~" will be expanded, and separators will be normalized to "/" (including on Windows)frontend/types/gotypes.d.ts (1)
393-405
: Add JSDoc comments to improve type documentation.Consider adding JSDoc comments to explain the purpose and usage of each field in the FileInfo type.
/** * Represents file information including metadata and permissions. */ type FileInfo = { /** The cleaned path (may have "~") */ path: string; /** The directory part of the path. "~" will be expanded, and separators will be normalized to "/" */ dir?: string; /** The file name without the directory path */ name?: string; // ... add comments for other fields };pkg/wshrpc/wshremote/wshremote.go (5)
666-671
: Improve error message consistency inRemoteFileMoveCommand
.Error messages inconsistently use URIs and cleaned paths, which could be confusing for debugging.
Apply this diff to make error messages consistent:
- return fmt.Errorf("destination %q already exists, use overwrite option", destUri) + return fmt.Errorf("destination %q already exists, use overwrite option", destPathCleaned) } else { err := os.Remove(destPathCleaned) if err != nil { - return fmt.Errorf("cannot remove file %q: %w", destUri, err) + return fmt.Errorf("cannot remove file %q: %w", destPathCleaned, err) }
207-207
: Improve logging security inRemoteStreamFileCommand
.Debug logging of file data length could expose sensitive information about file operations. Consider making this a debug-level log or removing it in production.
Apply this diff to improve logging:
- log.Printf("callback -- sending response %d\n", len(resp.Data64)) + if os.Getenv("DEBUG") != "" { + log.Printf("[debug] callback -- sending response of length %d\n", len(resp.Data64)) + }
717-720
: Consider platform-specific default file modes.The default file mode is hard-coded to 0644, which might not be appropriate for all platforms or use cases.
Consider defining platform-specific default modes or allowing configuration through options:
+ // DefaultFileMode is the default mode for new files + var DefaultFileMode fs.FileMode = 0644 + if runtime.GOOS == "windows" { + DefaultFileMode = 0666 + } createMode := data.Info.Mode if createMode == 0 { - createMode = 0644 + createMode = DefaultFileMode }
346-457
: Remove or properly document commented code.Large blocks of commented code make the codebase harder to maintain. Consider moving this to a separate branch or documenting it better with a clear timeline for implementation.
Either remove the commented code or improve the documentation:
- /* TODO: uncomment once ready for cross-connection copy + /* TODO(esimkowitz): Implement cross-connection copy support + * This code will be uncommented when the following requirements are met: + * 1. S3 support is implemented + * 2. Cross-remote copy feature is tested + * Expected timeline: Q2 2025 */
486-488
: Improve error handling in directory walk.Errors during directory walk are silently passed through without logging, which could make debugging issues difficult.
Apply this diff to improve error handling:
if err != nil { + log.Printf("error walking directory at %q: %v\n", path, err) return err }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
cmd/wsh/cmd/wshcmd-file.go
(12 hunks)frontend/app/store/wshclientapi.ts
(7 hunks)frontend/app/view/preview/directorypreview.tsx
(8 hunks)frontend/types/gotypes.d.ts
(8 hunks)pkg/blockcontroller/blockcontroller.go
(3 hunks)pkg/filestore/blockstore.go
(7 hunks)pkg/filestore/blockstore_test.go
(14 hunks)pkg/remote/connparse/connparse.go
(1 hunks)pkg/remote/fileshare/fileshare.go
(1 hunks)pkg/remote/fileshare/fstype/fstype.go
(1 hunks)pkg/remote/fileshare/s3fs/s3fs.go
(1 hunks)pkg/remote/fileshare/wavefs/wavefs.go
(1 hunks)pkg/remote/fileshare/wshfs/wshfs.go
(1 hunks)pkg/service/blockservice/blockservice.go
(2 hunks)pkg/wcore/block.go
(1 hunks)pkg/wshrpc/wshclient/wshclient.go
(8 hunks)pkg/wshrpc/wshremote/wshremote.go
(13 hunks)pkg/wshrpc/wshrpctypes.go
(10 hunks)pkg/wshrpc/wshserver/wshserver.go
(9 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- pkg/wcore/block.go
- pkg/filestore/blockstore_test.go
- pkg/service/blockservice/blockservice.go
- pkg/filestore/blockstore.go
- pkg/remote/fileshare/fileshare.go
- pkg/wshrpc/wshclient/wshclient.go
🧰 Additional context used
📓 Learnings (1)
pkg/remote/fileshare/wshfs/wshfs.go (1)
Learnt from: esimkowitz
PR: wavetermdev/waveterm#1790
File: pkg/remote/fileshare/wshfs/wshfs.go:122-122
Timestamp: 2025-01-22T01:28:41.417Z
Learning: The RpcClient in pkg/remote/fileshare/wshfs/wshfs.go is initialized and handled downstream by either main-server or wshcmd-connserver, as documented in the package comment.
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file.go
167-167: var fileMvCmd
is unused
(unused)
408-408: func fileMvRun
is unused
(unused)
559-559: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
🔇 Additional comments (24)
pkg/blockcontroller/blockcontroller.go (3)
Line range hint
779-783
: LGTM! Change is consistent with previous usage.The change maintains consistency with the previous connection keywords type change.
417-420
: LGTM! Verify connection keywords usage.The change from
wshrpc.ConnKeywords
towconfig.ConnKeywords
is consistent with moving connection-related types to a dedicated config package.Let's verify the usage of ConnKeywords:
✅ Verification successful
Migration to wconfig.ConnKeywords is complete and consistent
The change is properly implemented across the entire codebase with no remaining references to the old path. The type is correctly defined in the config package and consistently used by all dependent components.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check the usage of ConnKeywords across the codebase # Search for ConnKeywords usage echo "Searching for ConnKeywords usage:" rg "ConnKeywords"Length of output: 3778
Script:
#!/bin/bash # Search for any remaining references to the old path and check imports echo "Searching for wshrpc.ConnKeywords references:" rg "wshrpc\.ConnKeywords" echo -e "\nChecking imports related to ConnKeywords:" rg "import.*ConnKeywords"Length of output: 255
324-326
: LGTM! Verify type compatibility between FileOpts structs.The change from
filestore.FileOptsType
towshrpc.FileOpts
looks good and aligns with the PR objectives.Let's verify the type compatibility:
✅ Verification successful
Type compatibility verified - change is safe ✅
The change from
filestore.FileOptsType
towshrpc.FileOpts
is verified to be type-compatible, and the usage is isolated to this specific call. The new type has all the required fields with matching types.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check the field compatibility between FileOptsType and FileOpts # Search for the type definitions echo "Searching for FileOptsType definition:" rg -A 5 "type FileOptsType struct" echo "Searching for FileOpts definition:" rg -A 5 "type FileOpts struct"Length of output: 648
Script:
#!/bin/bash # Search for any remaining references to FileOptsType echo "Searching for FileOptsType references:" rg "FileOptsType" # Look for the old definition in git history echo -e "\nSearching git history for FileOptsType:" git log -p | rg -A 5 "type FileOptsType struct"Length of output: 199
Script:
#!/bin/bash # Search for FileOpts usage with MaxSize and Circular fields echo "Searching for FileOpts usage:" rg -B 2 -A 2 "FileOpts\{.*MaxSize.*Circular" # Check for any potential issues in other files echo -e "\nSearching for MakeFile calls:" ast-grep --pattern 'MakeFile($$$)'Length of output: 759
pkg/remote/fileshare/wavefs/wavefs.go (6)
1-23
: LGTM! Well-structured package with appropriate imports.The package structure follows Go best practices with proper copyright notice and necessary imports.
25-31
: LGTM! Clean interface implementation.The empty struct pattern with interface compliance verification is a good Go practice.
187-204
: LGTM! Clean implementation of Stat.The error handling is consistent and appropriate.
360-383
: Error Handling inDelete
Method Should Be ImprovedThe event is published after deleting the file but does not consider whether the deletion was actually successful before publishing the event.
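A minimal sketch of the ordering this comment asks for, publishing the delete event only after the deletion has actually succeeded; the helper and callback names below are placeholders, not the real wavefs API:
package main

import (
	"errors"
	"fmt"
	"os"
)

// deleteThenNotify removes the file and invokes publish only when the delete
// succeeded (or the file was already gone). Placeholder names for illustration.
func deleteThenNotify(path string, publish func(path string)) error {
	err := os.Remove(path)
	if err != nil && !errors.Is(err, os.ErrNotExist) {
		// Do not announce a deletion that did not happen.
		return fmt.Errorf("cannot delete %q: %w", path, err)
	}
	publish(path)
	return nil
}

func main() {
	_ = deleteThenNotify("/tmp/wavefs-demo.txt", func(p string) {
		fmt.Println("published delete event for", p)
	})
}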
385-419
: LGTM! Robust path handling implementation.The path handling is thorough and handles edge cases appropriately, including proper cleaning of paths and handling of special prefixes.
84-90
: 🛠️ Refactor suggestionImprove error handling for non-existent files.
When a file doesn't exist, the code falls through to list entries, which could be misleading as it treats non-existent files as directories. Consider explicitly handling
fs.ErrNotExist
similar to theReadAt
case.} else if !errors.Is(err, fs.ErrNotExist) { return nil, fmt.Errorf("error reading blockfile: %w", err) +} else { + return nil, fmt.Errorf("NOTFOUND: %w", err) } -} -list, err := c.ListEntries(ctx, conn, nil)Likely invalid or redundant comment.
frontend/app/store/wshclientapi.ts (2)
151-153
: Updates to method signatures enhance consistencyThe changes to method parameter types from various specific command data types to the general
FileData
type improve consistency across file-related commands. This refactoring simplifies the API and enhances maintainability.Also applies to: 166-168, 171-173, 176-178, 181-183, 201-203, 321-323
185-188
: Addition of stream command methods aligns with asynchronous operationsThe introduction of asynchronous generator methods for streaming data, such as
FileListStreamCommand
,FileStreamTarCommand
,RemoteListEntriesCommand
,RemoteStreamFileCommand
, andRemoteTarStreamCommand
, appropriately leverages TypeScript'sAsyncGenerator
to handle streaming operations.Also applies to: 205-208, 295-298, 311-314, 315-318
pkg/remote/fileshare/fstype/fstype.go (1)
13-42
: Well-designed interface with comprehensive file operations!The
FileShareClient
interface provides a robust set of methods for file operations with proper context handling and streaming support.pkg/remote/fileshare/wshfs/wshfs.go (3)
Line range hint
237-238
: Fix context leak inRemoteTarStreamCommand
.The cancel function from
context.WithTimeout
is discarded, which can lead to context leak.Apply this diff:
- readerCtx, _ := context.WithTimeout(context.Background(), timeout) + readerCtx, cancel := context.WithTimeout(context.Background(), timeout) + defer cancel()
Line range hint
382-385
: Strengthen path traversal protection in the commented code.The current path traversal check only looks for
..
but might miss other variants.Apply this diff:
- if strings.Contains(next.Name, "..") { + cleanPath := filepath.Clean(next.Name) + if strings.Contains(cleanPath, "..") || strings.HasPrefix(cleanPath, "/") || strings.HasPrefix(cleanPath, "\\") { log.Printf("skipping file with unsafe path: %q\n", next.Name) continue }
Line range hint
346-457
: Track the commented-out cross-connection copy code.The commented-out code for cross-connection copy should be tracked to ensure it's not forgotten.
Add a TODO comment with a tracking issue:
- /* TODO: uncomment once ready for cross-connection copy + /* TODO(#XXXX): Implement cross-connection copy support + * This code is temporarily commented out as part of separating S3 and + * cross-remote copy features. Track progress in issue #XXXX.Would you like me to create a new issue to track this implementation?
cmd/wsh/cmd/wshcmd-file.go (1)
35-35
:⚠️ Potential issueUse
int64
forTimeoutYear
constant.The
TimeoutYear
constant should be declared asint64
to prevent potential overflow on 32-bit systems.Apply this diff:
- TimeoutYear = int64(365) * 24 * 60 * 60 * 1000 + TimeoutYear = int64(365 * 24 * 60 * 60 * 1000)Likely invalid or redundant comment.
pkg/wshrpc/wshserver/wshserver.go (2)
283-289
: Clean refactoring to use the new fileshare package!The file operations have been cleanly refactored to use the new
fileshare
package, with consistent error handling and proper command implementations.Also applies to: 292-333
456-459
: Well-implemented configuration management!The
GetFullConfigCommand
is a clean addition that properly retrieves the full configuration from the watcher.pkg/wshrpc/wshrpctypes.go (4)
22-31
: LGTM! Well-defined constants with appropriate values.The file size limits and chunk sizes are well-documented and use reasonable values. The chunk sizes being powers of 2 is good for performance.
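To illustrate why a fixed chunk-size constant matters for the streaming commands, here is a small sketch that reads a source in fixed-size pieces and hands each one to a callback; the chunk size below is an arbitrary stand-in, not the actual value of the wshrpc constants:
package main

import (
	"fmt"
	"io"
	"os"
)

// chunkSize is a stand-in value; the real limits live in the wshrpc constants.
const chunkSize = 32 * 1024

// streamChunks reads r in fixed-size pieces so no single message has to carry
// the whole file.
func streamChunks(r io.Reader, emit func(chunk []byte) error) error {
	buf := make([]byte, chunkSize)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			if emitErr := emit(buf[:n]); emitErr != nil {
				return emitErr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	total := 0
	_ = streamChunks(f, func(chunk []byte) error {
		total += len(chunk)
		return nil
	})
	fmt.Println("streamed", total, "bytes in chunks of", chunkSize)
}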
Line range hint
59-86
: LGTM! Comprehensive set of file operation commands.The new command types follow a consistent naming pattern and cover essential file operations, with a clear separation between local and remote operations.
370-376
: LGTM! Well-designed FileData structure.The FileData type is flexible and can handle various file operations efficiently by combining file info, content, and directory entries in a single structure.
151-163
: LGTM! Consistent interface methods for file operations.The methods follow a consistent pattern, use appropriate return types, and handle errors consistently. The use of FileData type across all methods provides a unified approach to file operations.
frontend/types/gotypes.d.ts (1)
370-382
: LGTM! Well-defined TypeScript types matching Go counterparts.The FileData and FileDataAt types are correctly defined with optional fields and maintain consistency with the backend.
frontend/app/view/preview/directorypreview.tsx (1)
628-632
: 🛠️ Refactor suggestionImprove error handling for FileDeleteCommand.
Add proper error handling and user feedback for file deletion.
fireAndForget(async () => { + try { await RpcApi.FileDeleteCommand(TabRpcClient, { info: { path: await model.formatRemoteUri(finfo.path), }, - }).catch((e) => console.log(e)); + }); setRefreshVersion((current) => current + 1); + } catch (e) { + console.error(e); + model.showErrorMessage(`Failed to delete file: ${e.message}`); + } });Likely invalid or redundant comment.
Actionable comments posted: 0
🧹 Nitpick comments (2)
cmd/wsh/cmd/wshcmd-file.go (2)
35-35
: Consider using time.Duration for TimeoutYear constant.For better type safety and readability, consider using Go's
time.Duration
:-TimeoutYear = int64(365) * 24 * 60 * 60 * 1000 +TimeoutYear = int64(365 * 24 * time.Hour / time.Millisecond)
402-404
: Consider using a more reasonable timeout for file operations.Using a one-year timeout for file operations could lead to resource exhaustion if many operations are running simultaneously. Consider implementing a more reasonable default timeout (e.g., 1 hour) with the ability to extend it for large files.
- rpcOpts := &wshrpc.RpcOpts{Timeout: TimeoutYear} + const defaultTimeout = int64(time.Hour / time.Millisecond) + timeout := defaultTimeout + if info.Size > 1024*1024*1024 { // 1GB + timeout = TimeoutYear + } + rpcOpts := &wshrpc.RpcOpts{Timeout: timeout}Also applies to: 430-432
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cmd/wsh/cmd/wshcmd-file.go
(12 hunks)
🧰 Additional context used
🪛 golangci-lint (1.62.2)
cmd/wsh/cmd/wshcmd-file.go
562-562: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
🔇 Additional comments (4)
cmd/wsh/cmd/wshcmd-file.go (4)
37-64
: LGTM! Well-structured URI documentation.The URI help text is comprehensive and well-organized, clearly explaining the supported schemes and their usage patterns.
69-70
: LGTM! Clear and accurate command description.The updated description better reflects the command's capabilities to manage files across different storage systems.
90-96
: LGTM! Well-structured command flags.The flags for copy and move commands follow Unix conventions and provide essential functionality.
554-562
:⚠️ Potential issueFix premature loop termination in
fileListRun
.The return statement inside the loop causes it to exit after processing only the first batch of files.
Apply this diff:
if onePerLine { for respUnion := range filesChan { if respUnion.Error != nil { log.Printf("error: %v", respUnion.Error) return respUnion.Error } for _, f := range respUnion.Response.FileInfo { fmt.Fprintln(os.Stdout, f.Name) } - return nil } + return nil }Likely invalid or redundant comment.
🧰 Tools
🪛 golangci-lint (1.62.2)
562-562: SA4004: the surrounding loop is unconditionally terminated
(staticcheck)
Title changed from "wsh file overhaul without copy and S3" to "wsh file overhaul without cross-remote copy and S3"
The S3 support and the cross-remote copy support are taking longer, so this PR breaks out the rest of the file changes so we can merge them first.