Basic template for working on the coding challenge.

This commit is contained in:
Leonora Tindall 2021-11-11 11:46:40 -06:00
commit 278043b16a
Signed by: nora
GPG Key ID: 7A8B52EC67E09AAF
5 changed files with 184 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1 @@
/target

Cargo.lock generated Normal file

@@ -0,0 +1,7 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
[[package]]
name = "code-challenge"
version = "0.1.0"

Cargo.toml Normal file

@@ -0,0 +1,8 @@
[package]
name = "code-challenge"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]

README.md Normal file

@@ -0,0 +1,49 @@
# Rust Tokens Coding Challenge
This coding challenge presents a tokenization/parsing scenario in which some
external system delivers bytes in discrete packets that together represent a stream.
You will solve the challenge by modifying the method `process` of the struct `Parser`, without changing its function signature or the signature of the constructor, `new`. You can, however, modify the struct `Parser` itself in any way you see fit.
## Packetization
These packets aren't consistent in their size or contents, so the stream
`"I am a stream of packets; synthesize me."` could just as easily be
`"I am ", "a stream ", "of packets; ", "synthesize me."`,
one packet per character (`"I", " ", "a", "m", ...`), or just one packet
containing the entire input.
## Input and Output
Your task is to implement a struct, called `Parser`, whose `process` method
adds a suffix after certain words. So, for instance, if given the token `"foo"` and the suffix `"bar"`, your `Parser` would take the following input:

- `"Does this foo look like a fooing bar to you?"`

and return this output:

- `"Does this foobar look like a foobaring bar to you?"`
This needs to work over the full range of possible packetizations,
from one packet for the whole input to one packet per character.
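As a sketch of what this invariance means (the helper names below are hypothetical, not part of the challenge), the output must be byte-identical whether the input arrives as one packet or one byte at a time. Here the stage under test simply copies input to output, mirroring the provided default `process`:

```rust
use std::io::Write;

// Hypothetical harness: a packetization-invariant stage must produce the
// same bytes however the input is chopped up. `pass_through` mirrors the
// default `process`, which just copies its input to the sink.
fn pass_through(input: &[u8], output: &mut dyn Write) -> std::io::Result<()> {
    output.write_all(input)
}

// Feed a sequence of packets through the stage, collecting all output.
fn feed(chunks: &[&[u8]]) -> Vec<u8> {
    let mut out = Vec::new();
    for chunk in chunks {
        pass_through(chunk, &mut out).expect("write failed");
    }
    out
}

fn main() {
    let input: &[u8] = b"I am a stream of packets; synthesize me.";
    let whole = feed(&[input]); // the entire input as one packet
    let per_byte: Vec<&[u8]> = input.chunks(1).collect(); // one packet per byte
    assert_eq!(whole, feed(&per_byte));
}
```

A correct `Parser` has to uphold the same equality while also inserting the suffix, which is exactly what makes tokens split across packets tricky.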
## Mechanics of I/O
The `process` function takes a mutable reference to the `Parser` (so you can
add some state into the `Parser` if you want),
a chunk of bytes to process,
and a `&mut dyn Write` sink for the output.
If you're not familiar with the `Write` trait, it's worth looking into.
The gist, however, is that you can send a buffer (like a `&[u8]`) to the
`Write`r with its `write_all` method.
In fact, writing the input straight through with `write_all` is exactly what the provided default implementation of `process` does. That is of course not a correct solution, but it does pass half of the tests!
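For a quick feel for the trait: `Vec<u8>` implements `Write`, so an in-memory sink is enough to try `write_all` out (this is the same pattern the challenge's tests use):

```rust
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Vec<u8> implements Write, so it works as an in-memory sink,
    // just like the buffers in the challenge's tests.
    let mut sink: Vec<u8> = Vec::new();
    sink.write_all(b"hello, ")?;
    sink.write_all(b"writer")?;
    assert_eq!(sink, b"hello, writer");
    Ok(())
}
```

`write_all` keeps writing until the entire buffer has been accepted or an error occurs, which is why the default `process` can be a one-liner.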
## Non-functional Requirements
There are no constraints on speed, memory usage, or binary size. Go wild.
# This Repo
This Git repo has just two commits: one on the branches `challenge` (and `main`), and one on `solution`.
Don't look at `solution` until you've solved the challenge!

119
src/lib.rs Normal file
View File

@@ -0,0 +1,119 @@
use std::io::Write;
/// The parser/transformer. Given a `token` and a `suffix`, adds the `suffix`
/// after the `token` to any byte buffer given to [`Parser::process`].
///
/// For example, with a `Parser::new(b"foo", b"bar")`, calling
/// `parser.process(b"is foo a bar?", &mut writer)` will result in
/// `"is foobar a bar?"` being written to `writer`.
//
// NOTE: Feel free to modify this struct as needed.
pub struct Parser {
token: &'static [u8],
suffix: &'static [u8],
}
impl Parser {
/// Create a new `Parser` which will append `suffix` after `token` whenever
/// it appears in input passed to `process`.
//
// NOTE: This function signature should stay the same.
pub fn new(token: &'static [u8], suffix: &'static [u8]) -> Self {
Self { token, suffix }
}
/// Write the bytes given in `input` to `output`, inserting the bytes in
/// `suffix` immediately after each occurrence of `token`, even when `token`
/// is split across call boundaries.
///
/// # Examples
///
/// If the token is present in the input, the writer gets the input plus
/// the suffix right after the token. For instance, here,
/// `does this foo go bar?` is transformed into
/// `does this foobar go bar?`.
///
/// ```rust
/// # use code_challenge::Parser;
/// let mut parser = Parser::new(b"foo", b"bar");
/// let mut buffer = Vec::new();
/// parser.process(b"does this foo go bar?", &mut buffer).unwrap();
/// assert_eq!(b"does this foobar go bar?", buffer.as_slice());
/// ```
///
/// This works even if the token is split across multiple calls to the
/// `process` method on the same instance of `Parser`.
/// For instance, this is exactly the same as the previous example,
/// but splits the input across multiple calls to `parser.process`.
///
/// ```rust
/// # use code_challenge::Parser;
/// let mut parser = Parser::new(b"foo", b"bar");
/// let mut buffer = Vec::new();
/// parser.process(b"does this f", &mut buffer).unwrap();
/// parser.process(b"oo go bar?", &mut buffer).unwrap();
/// assert_eq!(b"does this foobar go bar?", buffer.as_slice());
/// ```
//
// NOTE: This function signature should stay the same.
pub fn process(&mut self, input: &[u8], output: &mut dyn Write) -> Result<(), std::io::Error> {
output.write_all(input)
}
}
#[test]
fn test_output_unmodified() {
let mut parser = Parser::new(b"lalala", b"");
let mut buffer = Vec::new();
parser
.process(b"does not contain the token", &mut buffer)
.expect("couldn't write to buffer");
assert_eq!(
"does not contain the token",
String::from_utf8_lossy(&buffer)
)
}
#[test]
fn test_output_onetoken() {
let mut parser = Parser::new(b"token", b"xxx");
let mut buffer = Vec::new();
parser
.process(b"does contain the token", &mut buffer)
.expect("couldn't write to buffer");
assert_eq!(
"does contain the tokenxxx",
String::from_utf8_lossy(&buffer)
)
}
#[test]
fn test_output_onetoken_multi() {
let mut parser = Parser::new(b"token", b"xxx");
let mut buffer = Vec::new();
parser
.process(b"does contain ", &mut buffer)
.expect("couldn't write to buffer");
parser
.process(b"the tok", &mut buffer)
.expect("couldn't write to buffer");
parser
.process(b"en", &mut buffer)
.expect("couldn't write to buffer");
assert_eq!(
"does contain the tokenxxx",
String::from_utf8_lossy(&buffer)
)
}
#[test]
fn test_output_splittoken() {
let mut parser = Parser::new(b"token", b"xxx");
let mut buffer = Vec::new();
parser
.process(b"doesn't contain the tok en", &mut buffer)
.expect("couldn't write to buffer");
assert_eq!(
"doesn't contain the tok en",
String::from_utf8_lossy(&buffer)
)
}