mirror of https://github.com/ndarilek/tts-rs.git synced 2024-11-01 08:29:37 +00:00
Commit Graph

443 Commits

Author SHA1 Message Date
2f85c3b2bf Add iOS build. 2020-08-18 15:16:30 -05:00
3b3be830c6 Update iOS build targets. 2020-08-18 14:59:48 -05:00
65eeddc1ad Remove missing targets. 2020-08-18 14:24:00 -05:00
27e9aaf034 Add iOS build. 2020-08-18 14:17:06 -05:00
abe5292868 Bump version. 2020-08-13 11:15:52 -05:00
cce1569c72 Sync supported synths in README and lib.rs. 2020-08-13 11:15:23 -05:00
4d980270be Merge branch 'master' of https://github.com/ndarilek/tts-rs 2020-08-13 11:12:25 -05:00
d199a6e8ee Update supported synthesizers. 2020-08-13 11:12:15 -05:00
ff877acd87 Eliminate warning in non-MacOS builds. 2020-08-13 11:11:38 -05:00
c5b1ff1944 Add AVFoundation backend, used automatically on MacOS 10.14 and above. 2020-08-13 11:08:00 -05:00
2d0ab8889a Eliminate a warning. 2020-08-13 06:58:16 -05:00
cc2a4c12f7 Rename ns_speech_synthesizer backend to appkit. 2020-08-13 06:46:16 -05:00
1d7018a558 Build MacOS releases and explicitly specify task dependencies. 2020-08-12 15:56:10 -05:00
d95eed63c5 Add MacOS CI test builds. 2020-08-12 15:48:38 -05:00
af678d76d1 Update documentation with supported backends. 2020-08-12 15:45:16 -05:00
75fd320d3f Implement rate/volume-setting for NSSpeechSynthesizer, along with other tweaks.
Unfortunately, there seems to be a difference in how the `hello_world` example processes rate and volume changes. I'm not sure whether it simply doesn't adjust the rate of an utterance that is already being spoken. In any case, there will arguably always be platform differences I can't account for, so this may just have to stand. Hopefully it doesn't interfere with actual usage.
2020-08-12 15:41:57 -05:00
dc1c00f446 Good news: NSSpeechSynthesizer speech now queues. Bad news: my brain bleeds. 2020-08-12 15:14:17 -05:00
7eccb9f573 Clean up println! and comparison calls. 2020-08-12 09:54:25 -05:00
427ca027be Add Drop implementation. 2020-08-12 09:52:16 -05:00
47bfe768e6 Get delegates working so speech interruption/queuing should now be possible.
* Fix broken delegate method signature.
* Add `NSRunLoop` into `hello_world` example so delegates are called. Presumably, MacOS apps already run one of these, but the example didn't.
2020-08-12 09:49:51 -05:00
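A hedged sketch of what servicing an `NSRunLoop` from Rust can look like, since AppKit delegate callbacks are only delivered while a run loop is running; this assumes the `objc` crate and is not the actual `hello_world` code, whose details may differ:

```rust
// Hedged sketch, not the actual hello_world code: service the current
// NSRunLoop so NSSpeechSynthesizer delegate callbacks are delivered.
// Assumes the `objc` crate and linking against Foundation.
use objc::runtime::Object;
use objc::{class, msg_send, sel, sel_impl};

fn spin_run_loop() {
    unsafe {
        // [[NSRunLoop currentRunLoop] run] blocks and dispatches timers and
        // delegate messages; a real macOS app's main loop already does this.
        let run_loop: *mut Object = msg_send![class!(NSRunLoop), currentRunLoop];
        let _: () = msg_send![run_loop, run];
    }
}
```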
faadc0e3b7 Still doesn't work, but at least it doesn't segfault now. 2020-08-11 14:44:52 -05:00
753f6c5ecd WIP: Initial support for MacOS/NSSpeechSynthesizer.
* Add necessary dependencies, build script, and `NSSpeechSynthesizer` backend.
* Get very basic speech working.

Needs a delegate to handle queued speech, and currently segfaults if one is set.
2020-08-11 12:11:19 -05:00
73786534dc Bump version. 2020-07-07 09:09:18 -05:00
e1bb6741a9 Correctly indicate that WinRT supports detection of speaking. 2020-07-07 09:08:44 -05:00
742daf332b Ensure wasm32-unknown-unknown target builds when releasing as well. 2020-07-06 13:35:17 -05:00
770bdd3842 Add necessary target. 2020-07-06 13:13:17 -05:00
61edbce301 Make sure we can build wasm32-unknown-unknown target. 2020-07-06 13:00:17 -05:00
7ae3faac63 Bump version. 2020-07-06 12:52:39 -05:00
16a6f6378a Under WinRT, recreate player completely when interruption is requested. 2020-07-06 12:52:18 -05:00
1d7c668a4a Sanity-check value to prevent overflow. 2020-07-06 12:14:50 -05:00
eb936a4ae0 Bump version. 2020-06-17 19:00:57 -05:00
d830f44c55 Handle corner case where WinRT speech that doesn't interrupt, and is played after a delay, causes recently-spoken utterances to replay.
`MediaPlayer` only seems to have states for playing and paused, but not stopped. Further, playing when the queue is finished seems to restart playback from the beginning.

Here, if the player is paused and positioned on the last item, we assume every queued utterance has finished, clear the list of items to play, and only then append the new item and begin playback again.

The correct solution is probably to investigate how events work in winrt-rs, but callbacks and Rust have always been a disaster when I've tried them, so I'm hesitant. This does seem to handle the basic scenarios I've thrown at it.
2020-06-17 18:54:34 -05:00
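For illustration, the queue-clearing rule described in that commit can be sketched with plain Rust types; `PlaybackState`, `items`, and `current_index` are stand-ins, not real winrt-rs names:

```rust
// Illustrative only: the rule described above, modeled with plain Rust types.
#[derive(PartialEq)]
enum PlaybackState {
    Playing,
    Paused,
}

struct SpeechQueue {
    state: PlaybackState,
    items: Vec<String>,   // queued utterances
    current_index: usize, // item the player is positioned on
}

impl SpeechQueue {
    fn enqueue(&mut self, utterance: String) {
        // If the player is paused on the last item, assume everything queued so
        // far has finished; clear the list so old utterances don't replay when
        // playback restarts from the beginning.
        if self.state == PlaybackState::Paused
            && self.current_index + 1 == self.items.len()
        {
            self.items.clear();
            self.current_index = 0;
        }
        self.items.push(utterance);
        // ...then begin playback again.
    }
}
```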
a6146a7f3e Install LLVM. 2020-06-17 18:01:09 -05:00
c2e3a41b2b Publish winrt_bindings from a Windows build server. 2020-06-17 18:00:14 -05:00
251128f917 Don't verify on publish since this crate requires Windows. 2020-06-17 17:55:35 -05:00
027dcd1b7c Set license. 2020-06-17 17:54:23 -05:00
f642d86f73 Tweak workflow to publish winrt_bindings package first. 2020-06-17 17:27:28 -05:00
ce8c5f5289 Refactor to use separate tts_winrt_bindings crate, and bump version. 2020-06-17 17:25:43 -05:00
8fe6a209ae Rename crate since I can't publish with a path dependency. 2020-06-17 17:25:01 -05:00
439bd53f13 Bump version. 2020-06-17 16:49:09 -05:00
45c7b1afc7 Various WinRT refinements.
* Move autogenerated code to subcrate to speed up compilation.
* `is_speaking` also checks whether a source is opening, in addition to whether it is playing.
* Return to using autoplay.
2020-06-17 16:46:42 -05:00
843bf876c1 Remove GitLab CI configuration. 2020-06-17 12:25:40 -05:00
6d88533715 Bump version. 2020-06-14 20:03:34 -05:00
10a9d56ae5 Remove autoplay setting. 2020-06-14 20:03:11 -05:00
69c5581799 Bump version. 2020-06-14 19:43:08 -05:00
933e850919 Ensure that MediaPlayer for speech is playing. 2020-06-14 19:42:48 -05:00
2f19d663dc Bump version. 2020-06-14 18:56:40 -05:00
1526602ad8 Don't close MediaPlayer when stopping speech, and actually support interruption. 2020-06-14 18:56:01 -05:00
2750ce4f99 Bump version. 2020-06-11 13:04:05 -05:00
4f011e6895 Get Tolk working again.
Two Tolk instances were being created. One checked for the presence of a screen reader. The other actually performed the speech, and was returned as part of the `TTS` instance.

Unfortunately, Tolk doesn't seem to appreciate being initialized twice. So here we check whether a screen reader is detected and, if one is, return the instance that did the detection. Otherwise, we error out and fall back to the WinRT backend.
2020-06-11 13:00:24 -05:00
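A rough sketch of that single-instance approach, assuming the `tolk` crate; the method names (`new`, `detect_screen_reader`, `speak`) follow its API as I recall it and may not match the crate exactly:

```rust
// Hedged sketch of the single-instance approach: one Tolk handle both detects
// the screen reader and performs the speech.
use tolk::Tolk;

fn speak_via_screen_reader(text: &str) -> bool {
    let tolk = Tolk::new();
    if tolk.detect_screen_reader().is_some() {
        // Reuse the instance that did the detection rather than creating a
        // second one, which Tolk doesn't tolerate.
        tolk.speak(text, false); // interrupt = false
        true
    } else {
        // No screen reader running; the caller falls back to the WinRT backend.
        false
    }
}
```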