Flect Cloud Blog re:newal


Reducing Data Transfer with Super-Resolution in WebRTC (Amazon Chime SDK JS)


In the previous article, we introduced a way to reduce data transfer in video-conferencing and game-streaming systems built with Amazon Chime SDK JS (Among Us auto mute) by compositing multiple users' video streams into a single stream. This time, we tried another way to reduce data transfer: lowering the resolution of the video before sending it and applying super-resolution after receiving it. We would like to introduce that approach here.
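Leaving aside the Chime SDK transport itself, the core idea can be sketched in a few lines: shrink each frame before it is encoded and sent, then reconstruct it on the receiving side. The sketch below is pure Python, with nearest-neighbour enlargement standing in for the actual super-resolution model; the frame size and function names are illustrative, not the implementation used in the article.

```python
def downscale(frame, factor=2):
    # Sender side: keep every `factor`-th pixel, so roughly 1/factor^2
    # of the data has to be transmitted.
    return [row[::factor] for row in frame[::factor]]

def upscale(frame, factor=2):
    # Receiver side: nearest-neighbour enlargement. A real system would
    # run a super-resolution model here instead.
    out = []
    for row in frame:
        wide = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

frame = [[0] * 1280 for _ in range(720)]  # one 720p grayscale frame
small = downscale(frame)                  # 360 x 640: about 1/4 the pixels
restored = upscale(small)                 # back to 720 x 1280 on the receiver
print(len(small), len(small[0]), len(restored), len(restored[0]))
# → 360 640 720 1280
```

The quality of the final image then depends entirely on how well the receiver-side model reconstructs the detail lost in `downscale`.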

The figure below shows an actual result. The rightmost image is the output with super-resolution applied; comparing the two sides of the red line, the left side has been upscaled to a higher resolution.

The still image is too compressed to show the difference clearly, so please download the video from the following link to check: download


Getting Started with Design Center on Anypoint Platform (1)



This time, we introduce how to use Design Center, one of the features of Anypoint Platform.


Creating an API specification in Design Center



  Design Center supports the following API specification formats:
  • RESTful API Modeling Language (RAML) version 0.8 or 1.0
  • OpenAPI Specification (OAS) version 2.0 or 3.0
  1. In Anypoint Platform, click "Design Center".
  2. In Design Center, click "Create new" > "New API Spec".
  3. On the New API Spec screen, choose how to create the API specification.
    ・Selecting "I'm comfortable designing it on my own" lets you write the specification in free format.
    ・Selecting "Guide me through it" lets you build the specification by following a guide. This article covers the "Guide me through it" option.
  4. Fill in the fields as the guide prompts you.
    For the detailed procedure, see also: docs.mulesoft.com
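As a concrete (illustrative) example, a minimal RAML 1.0 specification of the kind Design Center produces might look like the fragment below; the API title, base URI, and resource names are assumptions for illustration, not values from the guide:

```yaml
#%RAML 1.0
title: Customer API
version: v1
baseUri: https://api.example.com/{version}

/customers:
  get:
    description: Returns the list of customers.
    responses:
      200:
        body:
          application/json:
            example: |
              [{ "id": 1, "name": "Taro Yamada" }]
```

Design Center validates the RAML as you type and renders the same document as interactive API documentation.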

In the next article, we will continue with how to use Design Center.


An Idea for Reducing Video Data Transfer in Amazon Chime SDK Apps - Among Us Auto Mute -


In the previous article, we introduced how to build a game-streaming system with an auto-mute feature for Among Us and a game-screen broadcasting feature, using the Amazon Chime SDK for JavaScript (Amazon Chime SDK JS).





Demo on YouTube


MuleSoft: What Can You Do with Anypoint Visualizer on Anypoint Platform?



This time, we would like to introduce Anypoint Visualizer, one of the previously mentioned features of Anypoint Platform.

What you can do with Anypoint Visualizer
  • Architecture visualization

    • You can view the topology of your Mule application network (systems, Mule applications, and so on are displayed as nodes).
      Anypoint Visualizer: architecture visualization

  • Troubleshooting visualization

    • You can visually check metrics for each Mule application.
      Anypoint Visualizer: troubleshooting visualization
      * As an example, the metric "Avg memory utilization" is visualized for each Mule application.

  • Policy visualization

    • You can visually check the policies configured for each Mule application.
      Anypoint Visualizer: policy visualization
      * As an example, the application status of the Rate Limiting and Spike Control policies is visualized for each Mule application.

In the next article, we will introduce how to use Design Center.


Demonstrating the effect of text pre-processing using fastText classification for the Japanese language

This is Shuochen Wang from the Flect research lab. Today I am going to focus on NLP tasks for the Japanese language. Specifically, I am going to demonstrate:

  1. How to use fastText for text classification

  2. The effect of text pre-processing on text classification.

Table of contents

  • Introduction
    • What is text classification?
    • What is fastText classification?
    • Corpus material
  • Pre-processing
    • Grouping the data
    • Remove hyperlink
    • Word Segmentation
    • Append labels to the file
    • Removing stopwords and numerals
  • Training the model
  • Conclusion
  • Future work
  • Appendix


Natural Language Processing (NLP) allows machines to break down and interpret human language. It is used in a variety of tasks, including text classification, spam filtering, machine translation, search engines, and chatbots.

What is text classification?

To put it simply, text classification is a type of NLP task that assigns one of a set of predefined categories to a given text. It is a type of supervised learning.
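To make the pre-processing steps listed in the table of contents concrete, here is a small Python sketch of preparing fastText's supervised-learning input format, where each training line starts with `__label__<category>`. The stopword list here is a toy stand-in and the text is assumed to be already segmented into words; a real Japanese pipeline would use a proper stopword dictionary and a segmenter such as MeCab.

```python
import re

# Illustrative stopword list only; not a real Japanese stopword dictionary.
STOPWORDS = {"の", "は", "に", "を", "です"}

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+", "", text)  # remove hyperlinks
    tokens = [t for t in text.split()         # assumes pre-segmented text
              if t not in STOPWORDS and not t.isdigit()]  # drop stopwords/numerals
    return " ".join(tokens)

def to_fasttext_line(label: str, text: str) -> str:
    # fastText's supervised format: "__label__<category> <text>"
    return f"__label__{label} {preprocess(text)}"

print(to_fasttext_line("sports", "試合 の 結果 は 3 対 1 でした https://example.com"))
# → __label__sports 試合 結果 対 でした
```

Writing one such line per document produces a file that can be fed directly to fastText's supervised training mode.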


AR Remote Instructions Based on ARKit, Part 2

This is 馮 志聖 (Mike) from the Research and Development Division. Following the previous article: AR Remote Instructions Based on ARKit


This time we will further discuss and explain in detail how to share 3D virtual objects.


  • Content
  • Introduction
  • 3D objects on AR Remote Instructions
    • Purpose
    • System Overview
    • Issue
    • Solution
    • Demo
    • Evaluation
  • Conclusion
  • Future work
  • Other
  • Reference
    • ARKit
    • ARPaint
    • 3d Scanner App™


The development of science and technology and the spread of the Internet have brought many conveniences; in particular, distance is no longer a barrier between people. The epidemic has accelerated the adoption of remote work. Video chat tools are often used in remote meetings, but their range of application is still limited. For example:

Case 1: In a factory. When a piece of machinery fails, a maintenance engineer who works remotely cannot arrive on site immediately. If they try to guide repairs over video chat, they cannot grasp the damage to the equipment in detail, which increases time and maintenance costs. With an AR remote instructions application, a worker on site can scan the target machine to produce a well-structured 3D model of it and share that model with the maintenance engineer, who can then draw marks directly on the 3D model. This alleviates the problem.

Case 2: In interior design. A client wants to know whether a designer's work suits their environment. Without any data about the client's environment, a designer relying only on video chat cannot observe differences and details closely. With an AR remote instructions application, either party can scan the target area (such as a room or kitchen) or a product (such as a chair or table) to obtain a 3D model that captures its structure and materials, and share that model with the other. The designer can place products into the 3D model of the area and design its style, and the client can do the same to judge whether the design suits their environment. This alleviates the problem.

We have cited only two cases here, but there are certainly more. Given these various restrictions, we hope to develop software that can break through them.


A remote system for operating and monitoring robots through a simple user interface

This is 馮 志聖 (Mike) from the Research and Development Division.


  • Introduction
  • Remote control on ROS(Robot Operating System)
  • Conclusion
  • Future work
  • Other
  • Reference


With the rapid development of science and technology and the spread of the Internet, people are pursuing a better quality of life, and relaxed, enjoyable work has become the first choice. At the same time, labor shortages have gradually appeared in labor-intensive jobs and jobs that require long overtime. To fill this gap, it is an inevitable trend for robots to take over from humans. In daily life we often benefit from services provided by robots, such as cleaning robots, restaurant front-desk service, and home delivery, and many of the products we use are made by robots in factories. To facilitate production, robots have gradually become indispensable.

In recent years, due to the spread of the epidemic and for safety reasons, most people have chosen to work from home, and software and tools for remote collaboration have gradually gained attention. Among them, video chat services, which increase opportunities for discussion, have grown especially dramatically: fluent meetings allow members to reach a consensus quickly. For this reason, distance is becoming less of a barrier to remote work, but remote collaboration still has many limitations, and many researchers and developers have discussed how to break through them.

How are the two topics above related? In the following two situations, for example, remotely controlling a robot is the better choice. First, some areas of the world are hazardous to humans and other living things; remote control allows robots to perform tasks in these dangerous areas. Second, some emergency tasks require the assistance of senior personnel from other countries; given the epidemic and the constraints of time and distance, remotely controlled robots can be used to perform those tasks.

In this blog, we discuss a fast and effective way to build a robot operation and monitoring system with a simple interface, and verify the feasibility of executing tasks remotely.