Scam or legit? Uranus
Chinese team. Very questionable project. Another "Amazon Cloud killer." Team has been in crypto for 6 years (right, we're sure they met up with Nakamoto for coffee daily). No token and no ICO. Questionable! (almost Scam)
Overview
Concept
Chinese. Sketchy as fuck. Using the computing power of everybody's devices to take on Amazon Cloud. No idea how. They use generic, broad, and very questionable statements without specifics about how they are different. They claim team experience that's almost longer than the crypto community itself (6 years in blockchain). Do they even have a token or an ICO? The very definition of a shitcoin.
Actual Product
Cloud computing is too expensive, unsafe, and slow. They will somehow solve all of this, apparently by creating a marketplace for the unused computing power of people's devices.
Site
Visuals: Another space-themed template, and not the best execution either. It's in Korean and English.
Uniqueness: The home screen isn't that great for navigating the universe. Everything else is standard: milestones, team, etc.
Content: Plenty of complex technical text in pretty bad formatting. No registration forms of any kind, just an email address. The site generally focuses on their mission, plans, sizable team, a little on the technical parameters of what they're trying to do, and, of course, on the huge market potential.
Social Media
They do have social media links (most are even clickable):
- Reddit - brand-new account, two weeks old. Absolutely clean of posts (or visitors, for that matter)
- Telegram - almost 6,000 members. No activity except for welcome messages from the admin, but the member count is growing rapidly
- Twitter - fresh, just 128 followers and 4 tweets
- Facebook - same, just starting out
- Instagram - 6 people and... no activity
- GitHub - all quiet
Team
All of this project's activity revolves around James Jiang, who has management experience at ZTE and at Beijing Cloud Times Technology Co., Ltd. The latter is where he poached the entire cloud-tech part of the team, which is a strength of the project, since they already had ready-made solutions in virtualization (the company held a pretty solid place in Asian markets). That whole part of the team associates itself with the project on their social media accounts. The company's name morphed from China Cloud Power Technology Co. Ltd. to ubiCom Foundation to Uranus Foundation LTD.
The second half of the team consists of specialists in blockchain tech and cryptography. Most notable is Zou Jun, who is active at blockchain conferences, publishes articles, and is a rather well-known media personality in Asia. Alarmingly, the blockchain part of the team joined the project fairly recently (back in March, James Jiang said in an interview that they were not familiar with blockchain technology and urgently needed to make up for that).
Advisors
Rather strong team of advisors, except none of them mention this project in their social media accounts. We are left to guess that possibly the Uranus team met Zishang Wang and Xinhao Lv at the TokenSky conference in Seoul. Zishang Wang is the founder of that conference, and Xinhao Lv participated in it. Also, ubiCom Foundation was a co-organizer of the conference.
Whitepaper
Has three versions: English, Korean, and Mandarin. It's a Technical White Paper.
60 (very long) pages of text, diagrams, and pictures.
Lots of talk about their mission, problems, and how to solve them through their product.
Project summary
Provide massive "ubiquitous computing" power to large projects, apparently by using the spare computing power of participating devices, and thereby create a cloud to compete with Amazon Cloud, Microsoft Azure, etc.
Motivation and outlook
Current cloud computing is too expensive, unsafe, and slow for distributed applications. Most of Uranus' claims are very dubious and generic (oh wow - you're so totally the first ones to think of distributed cloud computing via the blockchain.)
The Uranus solution
This Chinese team claims not just 10 years of experience with cloud computing but also 6 years of "practical experience in the development and successful application of the consensus algorithm developed by public blockchain, proof of workload and development of smart contracts." Really now?
Their "solution" is to build several basic (in their own words!) extensions since blockchain by their own admission is pretty useless to their goal ("The blockchain cannot deliver services."). So they're building a "service delivery channel," "smart contract with rich content," "computing power containers," and more things that sound great to someone with a very high gullibility level. While at it, they are going to "transform the existing cloud native architecture" too.
Token Economy
Found zero info on tokens or ICO. For a project where every owner of a participating device is supposed to benefit from the use of their computing resources, you'd think they would at least bother to invent a token, right?
Architecture of the Uranus System
Architecturally, Uranus mainly comprises a service delivery system and a value delivery system:
Service delivery layer:
- Ubiquitous cloud infrastructure and standardized container metering
- Use on demand, automatic settlement
- Blockchain-based, decentralized computing resource pool
- High throughput, high availability, and low cost to meet the needs of modern business
- Container-packaged, dynamically managed cloud native applications
Value delivery layer (Uranus chain):
- Basic chain
- uraTiller module
- Enhanced consensus module
Uranus Chain Technology
System Architecture
The Uranus system builds a lightweight, decentralized IaaS & PaaS operating platform comprising:
- uraBlock
- Uranus management engine
- uraTiller
- access gateway
Note that they say nothing specific about any of the above. Just some picture that we're supposed to figure out.
Uranus Chain consists of:
- The storage layer (storing info about the node's computing power)
- The network layer (data transfer on the network)
- The protocol layer (consensus formation)
- The extension layer (dApp and smart contract launch)
Uranus Chain Consensus Algorithm
They use a hybrid DPoS+BFT consensus algorithm, plus validators to increase consensus security.
Consensus protocol workflow (a rough code sketch follows this list):
- Validator registration (the account needs to hold a certain amount as a deposit; to give up validator status, the deposit can be withdrawn after a hold period of about one month)
- A stakeholder may lend their deposit to a validator
- Stakeholders elect validators by voting (thus increasing the validator's weight)
- Election of qualified validators (the top 48 validators ranked by weight participate in forming the consensus)
- Transaction packaging by qualified validators (when processing transactions, qualified validators can exclude other validators if they misbehave)
- Confirmation of transaction information (if more than 2/3 of validators vote yes, the transaction is approved)
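To make that workflow concrete, here is a minimal sketch of validator election and the 2/3 confirmation rule, in Python. This is our reading of the whitepaper, not their code: the 48-validator cap and the one-month hold come from the paper, while the deposit threshold and the weight formula are our assumptions.

```python
from dataclasses import dataclass

QUALIFIED_SET_SIZE = 48      # top validators by weight (per the whitepaper)
MIN_DEPOSIT = 1_000          # assumed threshold; the paper gives no number
HOLD_PERIOD_DAYS = 30        # "about one month" before a withdrawn deposit is released

@dataclass
class Validator:
    address: str
    deposit: int = 0         # the validator's own deposit
    delegated: int = 0       # deposits lent by stakeholders
    votes: int = 0           # stakeholder votes

    @property
    def weight(self) -> int:
        # Assumed weighting: own deposit + delegated deposit + votes.
        return self.deposit + self.delegated + self.votes

def elect_qualified(validators: list[Validator]) -> list[Validator]:
    """Keep registered validators and take the top 48 by weight."""
    registered = [v for v in validators if v.deposit >= MIN_DEPOSIT]
    return sorted(registered, key=lambda v: v.weight, reverse=True)[:QUALIFIED_SET_SIZE]

def confirm(yes_votes: int, qualified: list[Validator]) -> bool:
    """A transaction is approved if more than 2/3 of qualified validators vote yes."""
    return 3 * yes_votes > 2 * len(qualified)
```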
Additionally, they list three attributes by which to measure the blockchain consensus algorithm:
- Agreement: All honest nodes form a consistent decision
- Validity: All transactions in the blockchain come from honest nodes
- Liveness: The consensus algorithm can always form a consistent decision without deadlock, even if many qualified validators are offline
The uraTiller module contains:
- The Tiller virtual machine
- Tiller native smart contracts
- Interfaces to external data
- Logic linking on-chain and off-chain data
Allows the creation of dApps on this platform.
Uranus Computing Power Engine
Capabilities:
- Automatically deploy and reproduce containers
- Flexible scale-out
- Manage containers across machines and organize better load balancing of containers
- Dynamically upgrade application containers
- Self-repairing capability
Uranus Smart Contract
Will use two smart contract types:
- Explanatory smart contracts similar to Ethereum's (process user transaction information)
- Native smart contracts in the uraTiller module (will utilize all the functions of the platform)
In the Uranus system, the oracle runs as a native smart contract.
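For illustration only, here is a toy version of that two-type split, with the oracle as a native contract. The interfaces and names are entirely ours; the whitepaper defines none.

```python
from abc import ABC, abstractmethod

class Contract(ABC):
    @abstractmethod
    def execute(self, call: dict) -> dict: ...

class ExplanatoryContract(Contract):
    """Ethereum-style contract: interpreted code that processes user transactions."""
    def __init__(self, bytecode: bytes):
        self.bytecode = bytecode

    def execute(self, call: dict) -> dict:
        # A real VM would interpret self.bytecode here; this stub just echoes the call.
        return {"status": "ok", "input": call}

class NativeOracleContract(Contract):
    """Native uraTiller contract: may reach outside the chain for external data."""
    def __init__(self, fetch_external):
        self.fetch_external = fetch_external   # injected off-chain data source (assumption)

    def execute(self, call: dict) -> dict:
        return {"status": "ok", "oracle_value": self.fetch_external(call["query"])}
```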
PoC (Proof of Contribution) Mechanism
Machines with a higher contribution value are more likely to be scheduled and are allocated to users with priority, thus earning more tokens.
Total contribution value of a container = basic contribution value + service contribution value + chain contribution value
Basic contribution value = the node's total production capability (CPU + RAM + bandwidth + storage)
Service contribution value = a composite score from statistical analysis of the container's past transaction records and user feedback
Chain contribution value = credit for voting in support of the system's stability
The platform will also automatically distribute computational load between nodes.
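The formula is simple enough to write down. A hedged sketch follows; the per-resource weights and the averaging scheme are our guesses, since the whitepaper only names the inputs.

```python
def basic_contribution(cpu_cores: float, ram_gb: float,
                       bandwidth_mbps: float, storage_gb: float) -> float:
    """Basic contribution value: the node's raw capacity (CPU + RAM + bandwidth + storage).
    The per-resource weights are assumptions; the whitepaper gives none."""
    return 1.0 * cpu_cores + 0.5 * ram_gb + 0.2 * bandwidth_mbps + 0.1 * storage_gb

def service_contribution(past_tx_scores: list[float], user_feedback: list[float]) -> float:
    """Service contribution value: composite score over past transactions and user feedback."""
    history = sum(past_tx_scores) / len(past_tx_scores) if past_tx_scores else 0.0
    feedback = sum(user_feedback) / len(user_feedback) if user_feedback else 0.0
    return (history + feedback) / 2

def total_contribution(basic: float, service: float, chain: float) -> float:
    """Total contribution value = basic + service + chain (straight from the whitepaper)."""
    return basic + service + chain
```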
Computing Power Container Engine (CPCE)
The Uranus CPCE provides an application-oriented container cluster deployment and management system. Its goal is to eliminate the burden of scheduling physical/virtual computing, network, and storage infrastructure.
Computing Power Container Engine Architecture
The Uranus CPCE management engine draws on Borg's design concepts, such as Pods, Services, Labels, and one IP per Pod.
Uranus Resource Control Management
The Resource Controller manages all the underlying resource-related work.
Multi-Region Scale Computing Power Container Management
When the number of computing power containers is large, they are managed in separate regions to lower the management, usage, and service costs of each computing power pool.
In the original architecture, a Node manages a Worker node; here, a Node manages multiple clusters. The Cluster Controller maintains the health status of every cluster and monitors load status, while the Service Controller discovers cross-cluster services.
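A rough sketch, under our own assumptions, of what that Cluster Controller loop might look like; the health criterion and data model are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    region: str
    load: float = 0.0        # 0.0 (idle) .. 1.0 (saturated)
    healthy: bool = True

class ClusterController:
    """Maintains the health status of every cluster and monitors its load."""
    def __init__(self, clusters: dict[str, Cluster]):
        self.clusters = clusters

    def probe(self, name: str) -> None:
        # A real controller would call the cluster's health endpoint; we fake it here.
        cluster = self.clusters[name]
        cluster.healthy = cluster.load < 0.95      # assumed health criterion

    def pick_region(self) -> str:
        """Send new workloads to the healthiest, least-loaded region."""
        healthy = [c for c in self.clusters.values() if c.healthy]
        return min(healthy, key=lambda c: c.load).region
```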
Computing Power Container Service Discovery and Deployment
- Discovery:
The Uranus engine service is made up of a set of Pods, which are containerized applications. If a Worker cluster has an internal service, say MySQL, the Uranus engine service can be accessed by resolving service names through DNS.
- Deployment:
The API gateway in the system allocates an X-Traffic-Group identifier to each user request header. Based on this identifier, HAProxy routes user requests to different deployment environments. In addition, an automatic health check tool verifies whether the HAProxy instance related to this service is healthy and how many replicas each Pod has.
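The X-Traffic-Group mechanism isn't explained beyond that paragraph, but header-based routing is standard practice. A tiny sketch of the idea; the group names and backend addresses are invented.

```python
# Toy version of the header-based routing the paper attributes to HAProxy:
# the API gateway tags each request with X-Traffic-Group, and the tag decides
# which deployment environment (backend pool) serves it.
ROUTING_TABLE = {
    "canary": ["10.0.1.10:8080"],
    "stable": ["10.0.2.10:8080", "10.0.2.11:8080"],
}

def route(request_headers: dict) -> str:
    group = request_headers.get("X-Traffic-Group", "stable")
    backends = ROUTING_TABLE.get(group, ROUTING_TABLE["stable"])
    return backends[0]      # a real load balancer would also balance within the pool

print(route({"X-Traffic-Group": "canary"}))   # -> 10.0.1.10:8080
```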
Affinity and AntiAffinity in Computing Power Container Scheduling
Due to the characteristics of a distributed peer-to-peer system, the Uranus engine enhances Pod scheduling with multiple-scheduler configuration and node Affinity/AntiAffinity characteristics. Affinity and AntiAffinity are runtime scheduling policies, mainly including:
- NodeAffinity: specifies which nodes a Pod can (or cannot) be deployed on, governing the relationship between Pod and host.
- PodAffinity: specifies which Pods a given Pod can be deployed together with under the same topological structure.
- PodAntiAffinity: specifies which Pods a given Pod cannot be deployed together with under the same topological structure; together with PodAffinity it governs the relationship between Pods.
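Since these three rules are borrowed wholesale from the Kubernetes world, a minimal scheduler filter illustrates what they actually do. The labels and structures below are illustrative only, not Uranus APIs, and the topology domain is simplified to a single node.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    labels: dict = field(default_factory=dict)
    pods: list = field(default_factory=list)     # label dicts of Pods already placed here

@dataclass
class PodSpec:
    labels: dict
    node_affinity: dict = field(default_factory=dict)      # node labels required
    pod_affinity: dict = field(default_factory=dict)        # must co-locate with such Pods
    pod_anti_affinity: dict = field(default_factory=dict)   # must NOT co-locate with such Pods

def matches(required: dict, actual: dict) -> bool:
    return all(actual.get(k) == v for k, v in required.items())

def feasible_nodes(pod: PodSpec, nodes: list[Node]) -> list[Node]:
    """Filter nodes by the three affinity rules."""
    out = []
    for node in nodes:
        if not matches(pod.node_affinity, node.labels):
            continue                                   # NodeAffinity: wrong kind of host
        if pod.pod_affinity and not any(matches(pod.pod_affinity, p) for p in node.pods):
            continue                                   # PodAffinity: required neighbour missing
        if pod.pod_anti_affinity and any(matches(pod.pod_anti_affinity, p) for p in node.pods):
            continue                                   # PodAntiAffinity: forbidden neighbour present
        out.append(node)
    return out
```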
Workload Migration and Fault Recovery of Computing Power Containers
uraContainer realizes the pooling of distributed computing resources and the standardized measurement of virtual computing resources. The built-in self-healing mechanism for workload failures in the Uranus system automatically helps the user select a new target computing power container in the shortest possible time.
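A sketch of that self-healing step as we read it: on container failure, reschedule the workload to the healthy container with the highest contribution value. The selection rule and names are our assumptions; the paper only promises "the shortest possible time."

```python
def reschedule(failed_container: str, candidates: dict[str, float]) -> str | None:
    """Pick a replacement computing power container after a workload failure.

    `candidates` maps container id -> contribution value (see the PoC section);
    the failed container is excluded and the highest-value survivor wins.
    """
    survivors = {cid: score for cid, score in candidates.items() if cid != failed_container}
    if not survivors:
        return None
    return max(survivors, key=survivors.get)
```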
Computing Power Container uraContainer
uraContainer is a container running on a hypervisor. It offers strong isolation, a light footprint, and fast startup while supporting standard image formats.
Worker and Agent Design
Workers are the hosts, physical or virtual, that actually run Pods. To manage Pods, each Worker node should run at least the uraContainer, agent, and proxy services. A Pod is a set of closely related containers that share PID, IPC, network, and UTS namespaces; the Pod is designed so that multiple containers can share the network and file systems within it.
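That Worker/Pod description is lifted almost verbatim from Kubernetes docs. For the record, a minimal data model of what they describe; the field names are ours.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    image: str            # standard image format, per the whitepaper

@dataclass
class Pod:
    """Closely related containers sharing PID, IPC, network, and UTS namespaces."""
    name: str
    containers: list[Container] = field(default_factory=list)
    shared_namespaces: tuple = ("pid", "ipc", "network", "uts")

@dataclass
class Worker:
    """A physical or virtual host that actually runs Pods.

    Per the whitepaper, each Worker must run at least the uraContainer runtime,
    an agent, and a proxy service."""
    hostname: str
    services: tuple = ("uraContainer", "agent", "proxy")
    pods: list[Pod] = field(default_factory=list)
```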
Application Scenarios and Ecology System
Application Scenarios
Distributed implementation of general-purpose computing tasks:
- DNA computing
- Computing of planetary orbits
- Analysis of weather data
Projects based on edge computing:
- IoT node management and deployment
- Voting platform
- Advertising
- Games
- Data acquisition
Application Ecology (Block as a Service):
- One-click deployment
- Container image supermarket and App store
Business Application Examples
- Network Data Acquisition
- Voting Platform
- Games
- IoT Terminals
Container Image and Application Ecology System
- Container Image Ecology
- Ecology of the Application Market
Development Roadmap
Phase 1: Community Version (V1) - 2018/Q3, beta test and release of a community version
Phases 2-4: Commercial Versions (V2, V3, V4) - 2018/Q4-2019/Q1, 10,000, 100,000, and 500,000 users/devices
Phase 5: Ecological Version (V5) - 2019/Q2, 1,000,000 users/devices
Then they have 10 pages about their "legendary" team, mostly just listing their achievements. Not a word about their token, ICO, or anything like that.
Questions for the project
Q: Who the fuck wrote this White Paper? Formatting is even worse than the content (which is a big accomplishment).
A:
Q: Key concept is nothing new. And there are a lot of ideas in their plans - no focus on a single area. Can they really pull off something so ambitious?
A:
Q: The "minimal balance on the validator" is very strange. What's the point there? To get more tokens into the network?
A:
Q: The token will get devalued a lot as the network grows. How will they solve this problem?
A:
Q: Will there be token buybacks?
A:
Q: Where are their tokenomics? Where is their ICO info?
A:
Conclusion
An experienced team started this project before the ICO boom and is now desperately trying to tie blockchain into it (without a clue about what to do with it). The White Paper is horribly written and formatted: duplicated photos, stretched-out photos, low-quality photos, tons of text that says nothing, and outright brush-offs a la "want to know more - read a book." The linked GitHub is empty, though it does link to a real repository started in 2011 and finished in 2015.
So many errors... looks absolutely unprofessional.
But the team does seem to have real work experience with cloud computing. They do the conference circuit. And their social media accounts are gaining users.
Verdict
Questionable!
Disclaimer: The above audit is not in any way financial advice or a solicitation to buy - it's merely our collective opinion that we are kind enough to share with you. Don't make us regret that.
The report is prepared in partnership with https://t.me/ico_reports
Our links: