SAP ODP OData

A subscriber is the consumer of the data. Which subscriber types are currently available depends on the release.

A context represents a source of ODPs. Context identifiers exist for all technologies whose analytical views can be exposed as ODPs. Which ODP contexts are currently available depends on the release.

Finding the right component helps you locate related notes and other documents. And if you need to send a customer incident to SAP, the right component ensures that the incident is assigned to the correct queue.

New to Operational Data Provisioning? Having questions regarding your scenario? Start with our Frequently Asked Questions to find the answers to your questions. For delta replication, the read pointer is stored in a special timestamp format, the sequential transaction number (TSN).

Operational Data Provisioning (ODP) FAQ

The sequential transaction number (TSN) sorts transactions that write data to the delta queue into a defined sequence with regard to delta replications. The sequential transaction number is set at the end of a transaction, shortly before the database commit, and then assigned to the transaction ID (TID) of this transaction. The transaction ID identifies a transaction from the perspective of the delta queue. The ID is assigned to the queue at the start of a transaction, before data from this transaction is written to the queue.
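To make this ordering concrete, here is a minimal Python sketch (all names invented for illustration, not SAP code) in which a transaction that starts first, and therefore gets the lower TID, commits last and therefore gets the higher TSN:

```python
import itertools

# Hypothetical counters standing in for the delta queue's ID generators.
tid_counter = itertools.count(1)  # TID: assigned when a transaction starts
tsn_counter = itertools.count(1)  # TSN: assigned shortly before the DB commit

class Transaction:
    def __init__(self):
        self.tid = next(tid_counter)  # reflects start order
        self.tsn = None               # unknown until commit

    def commit(self):
        self.tsn = next(tsn_counter)  # reflects commit order

t1 = Transaction()  # starts first  -> TID 1
t2 = Transaction()  # starts second -> TID 2
t2.commit()         # commits first -> TSN 1
t1.commit()         # commits last  -> TSN 2

# Delta replication must read in commit order, i.e. sorted by TSN, not by TID.
for t in sorted((t1, t2), key=lambda t: t.tsn):
    print(f"TID={t.tid} TSN={t.tsn}")
```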

Therefore the ID does not necessarily reflect the commit sequence when two transactions are compared. The commit sequence is only defined when a sequential transaction number (TSN) is assigned at the end of a transaction, shortly before the database commit.

Data created within business processes can be referred to as operational data: business transaction documents, master data, configuration data. Provisioning of operational data has the purpose of making this data available for further processing in other contexts.

This is a well-known topic with a long history for integration scenarios spread across multiple systems. For example, master data are replicated between systems based on harmonized data structures and standardized interfaces. Another area is the classical process integration, where the data provider sends messages via A2A interfaces, always based on the concrete structures of the transaction documents to be processed.

However, as soon as the concrete data structure can no longer be assumed, that is, if basically any kind of application data may need to be provided, a more general approach is required; this is referred to as operational data provisioning in the following. Analytics represents such an area, consisting of infrastructures, tools and applications that process data made available by source systems.

A lasting assumption cannot be made about the data to be provided, and thus about the data structures to be supported, because this depends entirely on the application use case. On the other hand, the analytical software consuming extracted data is a hot spot of innovation, with varying needs and evolving approaches to accessing data sources. Therefore, data exchange must be based on well-cut services communicating over widely available protocols. Operational data provisioning as described in this document does not claim to be the exclusive approach for data integration between applications.

A decision has to be made case by case, weighing the alternatives mentioned above. DataSources were the first interface abstraction for heterogeneous application data, used for defining data extraction structures and linking them with extractor implementations. This approach formed a solid basis for reporting and analysis in a BW system operated as a side-car or as a central Enterprise Data Warehouse hub.

Data extracted from the application DataSources are fed into modeled warehouse layers for data integration and harmonization. The warehouse layer adds analytical semantics and makes data available for multi-dimensional processing.

With the rising demand for operational analytics, embedding analytical capabilities into Business Suite applications, this approach to unified access to application data was no longer sufficient to address local reporting requirements.


For example, DataSources with transactional data are not linked with DataSources exposing referenced master data; DataSource field structures don't convey the analytical semantics needed for analytic queries.

This led to the development of the Operational Data Provisioning infrastructure for extracting data from applications built on the NetWeaver platform. With this, the NetWeaver platform architecture introduced a unified data provisioning layer with rich metadata and analytical semantics. Based on this layer, operational reporting embedded into Business Suite 7i could be established by defining analytic queries with multi-dimensional access to ODP data.

Do you know what ODP is?

Or how does ODP work? Or how can you use it? What are the prerequisites? What impact will it have? Please let me therefore share some important information with all of you:

Operational Data Provisioning provides a technical infrastructure that you can use to support two different application scenarios. The first of these is operational analytics for decision making in operative business processes (see Introduction to Operational Data Provisioning for more information).

The other very prominent application scenario is data extraction and replication: Operational Data Provisioning supports extraction and replication scenarios for various target applications and supports delta mechanisms in these scenarios. In the case of a delta procedure, the data from a source (the so-called ODP provider) is automatically written to a delta queue (the Operational Delta Queue, ODQ) using an update process, or passed to the delta queue using an extractor interface.
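To illustrate the queue mechanics, here is a small, hypothetical Python sketch (a toy model, not SAP code) of a delta queue that one provider writes to and that several subscribers read independently, each with its own pointer to the last confirmed change:

```python
class OperationalDeltaQueue:
    """Toy model of an ODQ: one provider writes, many subscribers read."""

    def __init__(self):
        self.records = []   # (tsn, payload) in commit order
        self.pointers = {}  # subscriber -> last confirmed TSN
        self.next_tsn = 1

    def write(self, payload):
        """Provider side: an update process appends a committed change."""
        self.records.append((self.next_tsn, payload))
        self.next_tsn += 1

    def fetch_delta(self, subscriber):
        """Subscriber side: everything after the last confirmed TSN."""
        last = self.pointers.get(subscriber, 0)
        return [(tsn, p) for tsn, p in self.records if tsn > last]

    def confirm(self, subscriber, tsn):
        """Move this subscriber's pointer; data stays for other subscribers."""
        self.pointers[subscriber] = tsn

queue = OperationalDeltaQueue()
queue.write({"order": 4711, "amount": 100})
queue.write({"order": 4712, "amount": 250})

# Two subscribers share one queue but keep independent pointers.
for sub in ("BW_SYSTEM", "DATA_SERVICES"):
    delta = queue.fetch_delta(sub)
    print(sub, delta)
    queue.confirm(sub, delta[-1][0])  # confirm up to the last TSN read
```

Because each subscriber keeps its own pointer, the same queue entries can serve many targets; this is the basis for the shared-queue benefit discussed further below.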

For the many more detailed questions on architecture, scenarios, implementation, prerequisites and availability we have compiled an FAQ document that we want to share with you now.

Please find the full document here. Feel free to comment on this FAQ list below. We are happy to update the FAQ list, as it is a living document. Good info Rudolf, maybe you could give some insight into whether an existing DataSource can be converted to an ODP DataSource?

Is this possible? Yes, it is possible, but it will multiply the data. I have not been able to find a document explaining master data delta using ODP. Is there a way to enable delta for master data? Do you have a reproducible example? What is the source system release? What is the BW target system? Obviously this has license implications… Yes, you are right. Your current understanding from above is also correct.

Hi Rudolf, thanks for the given information. So does it mean that ODP could not be used as a source of data for an Oracle database, for example? What would be the best approach to send data from BW4 to Oracle on a daily basis? Or just a way to know if new records exist? On every run of the update job, new delta records get collected in the delta queue. In RSA7 we can see these new records. In addition, the Total LUWs column gives us an indication of whether new records are sitting in the queue.

Two consecutive delta extractions will reset the counter. Then, please click on the queue you are interested in and you will get a list of subscribers (in the case of a target SAP BW, this is all the DTPs that load in active delta mode from this queue). Clicking on one of the subscribers, you will then get a list of requests. In this list you can see whether a request has been extracted successfully to SAP BW (status: confirmed) or whether the request has not been confirmed yet.

Thanks for explaining. I get the idea now, but on our system it does not behave this way. When I run a delta update from BW, a new confirmed request is added to the existing list of confirmed requests. This does not allow me to see if data is sitting in the queue.

There is so much information, and seemingly so many new technologies, available about developing mobile and responsive applications within SAP at the moment that it is very difficult to know where to start.

How do these all fit into the big picture for an SAP developer? Well, let's start at the ground level. Don't worry too much about the terminology at this stage; all will become clear. Leave everything else as default unless you know you need something specific. I will show you further down where you would get the error if you had used all the fields. This info might just help you understand the error quicker when you are creating one for real in the future.

Once it does, click on the one you want to add. The next screen shows you the selected service details; enter the package details. Find the one called GetEntitySet (Query) and right-click on it. Simply click OK. You will now be taken to tcode SE. Save and activate.
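At this point you can already test the activated service over HTTP. A quick sketch in Python, where the host, service path, entity set name and credentials are placeholders you must replace with your own system's values:

```python
import requests

# Placeholders: substitute your Gateway host, service name, user and password.
BASE = "https://mygateway.example.com:44300/sap/opu/odata/sap/ZMY_SERVICE_SRV"

resp = requests.get(
    f"{BASE}/MyEntitySet",                    # the entity set you modelled
    params={"$format": "json", "$top": "5"},  # standard OData query options
    auth=("MYUSER", "MYPASSWORD"),
)
resp.raise_for_status()
print(resp.json())
```

If the implementation has a problem, the call fails with an error text like the one below.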

When testing the service, the following error text was processed in system TST: The current statement is only supported for character-type data objects. The error occurred on the application server erpukpltm. You may also need to analyze the trace files of other work processes. If you do not yet have a user ID, contact your system administrator.

This step is a prerequisite for our data extraction solution and hence should not be skipped. Here is the high-level flow of the approach. In the connection tab of the dataset, select the option to create a new linked service.

The linked service window will pop up. In our scenario the output is an Azure SQL table, so we have to create a table structure in the database to store the data. Similar to the source dataset configuration, we need to create a linked service for the sink as well, but here the linked service points to the Azure SQL database. The pipeline can be designed either with only one copy activity for a full load, or as a more complex one to handle condition-based delta.

Connect to the Azure Data Factory V2 and select the create pipeline option. The pipeline can be executed through a trigger or by selecting the debug button. The source query in the copy activity must be enhanced with a dynamic expression to handle the delta load. We can achieve the delta extraction solution using the high-level approach below.
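The dynamic expression is essentially a watermark filter. As a minimal sketch, assuming a timestamp column on the source table (all names below are invented), the delta query could be built like this in Python:

```python
from datetime import datetime, timezone

def build_delta_query(table: str, ts_column: str, last_watermark: datetime) -> str:
    """Build a source query that only selects rows changed since the watermark."""
    cutoff = last_watermark.strftime("%Y-%m-%d %H:%M:%S")
    return f"SELECT * FROM {table} WHERE {ts_column} > '{cutoff}'"

# Example: rows changed since the last successful pipeline run.
last_run = datetime(2020, 1, 1, tzinfo=timezone.utc)
print(build_delta_query("ZSALES_ORDERS", "CHANGED_AT", last_run))
```

After each successful run, the watermark would be advanced to the highest timestamp loaded, so the next run only picks up newer changes.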

All other sources can exist with any DB. In general, it is based on a certain NetWeaver installation. There are different source types. The ODQ is really a physical queue of its own in this framework, encapsulating the data for a subscriber. Here the subscriber is the target system. We can have one provider with many subscribers.

We can share the same queue for delta, which is the main benefit of this use case. Tcode RSA7 provides a basic overview of the delta state of each DataSource. In the ODP context this is renovated: the delta queue monitor is a much more elaborate transaction which allows much better monitoring of the delta state. Flexible recovery and retention periods: it is also easy to define very flexible recovery and retention periods for each queue. This means that for each source you are transferring, you can define how long the data is kept there, and everything else is then done automatically.

In the following screenshot there is a different source system type, which is called a context in the ODP case.


Here you can create a source system, which is then the connection to the source via this context. Here is a list of all source system types from RSA1. In BW 7, go into the source system and create an ODP DataSource: right-click on an application component and create a DataSource. Choose the name of the ODP, that is, the object in the source which provides the data. It would be the same for any other data extractor from the Logistics Cockpit or Financial Controlling.

There are other contexts, for example HANA analytic views; then you would provide the name of an analytic view here, which is the object in the source. The same applies to SLT; there it would be the name of the table in the source system.
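Whatever the context, an exposed ODP can also be consumed outside of BW, for example via SAP's ODP-based OData extraction services. The following is a rough, hypothetical Python sketch; the service path and entity set name imitate the typical Gateway pattern but are placeholders for whatever your system actually generates:

```python
import requests

# Placeholders: substitute your own host and the service generated for your ODP.
BASE = "https://myserver.example.com:44300/sap/opu/odata/sap/ZODP_EXTRACT_SRV"
AUTH = ("MYUSER", "MYPASSWORD")

# Initial load: fetch the full data set for the exposed ODP.
resp = requests.get(f"{BASE}/FactsOfZMYODP", params={"$format": "json"}, auth=AUTH)
resp.raise_for_status()
payload = resp.json()["d"]
print(len(payload["results"]), "records in the initial load")

# If the service supports delta, the OData V2 response carries a delta link
# that can be requested later to get only the changes since this extraction.
delta_link = payload.get("__delta")
if delta_link:
    delta = requests.get(delta_link, params={"$format": "json"}, auth=AUTH)
    delta.raise_for_status()
    print(len(delta.json()["d"]["results"]), "changed records since last call")
```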


The extraction tab shows the same delta processes and methods (e.g. after-images). Check the adapter line for the type of extraction you are doing.
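To illustrate what a delta method means for the target, here is a small, hypothetical Python sketch contrasting an after-image delta (the record carries the complete new state, so the target overwrites by key) with an additive delta (the record carries only the difference, so the target sums it up):

```python
def apply_after_image(target: dict, record: dict) -> None:
    # After-image: the delta record holds the complete new state of the key.
    target[record["key"]] = record["amount"]

def apply_additive(target: dict, record: dict) -> None:
    # Additive: the delta record holds the change to add to the old state.
    target[record["key"]] = target.get(record["key"], 0) + record["amount"]

orders_ai = {"4711": 100}
apply_after_image(orders_ai, {"key": "4711", "amount": 130})
print(orders_ai)   # {'4711': 130} - new state replaces the old one

orders_add = {"4711": 100}
apply_additive(orders_add, {"key": "4711", "amount": 30})
print(orders_add)  # {'4711': 130} - difference is summed onto the old state
```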

Data extraction: directly from the source system, PSA not used. Here we can jump over the PSA table and write directly into the InfoProvider.


This delta queue monitor is really interesting. If you compare it to RSA7, there is a big difference. You see here again the different sources, the different providers. Here we can calculate the data volume as well: you can see how many data records have been transferred and how many are in the source. Click on Calculate Data Volume (Extended View). This is really new; it was not possible before in classical BW.

If you double-click on it, you will now see the different subscribers. We can monitor everything that is happening in the source and the target. So this gives you a lot of information about the state of extraction: all the connected systems, all the connected subscribers, all this information is contained in there.

This is the administration perspective, where you can define a data retention period and the like.


Nice blog Gunpreet Singh. Would ODP be applicable in this use case? Please suggest. In these cases we are exposing a DataSource with direct access to tables.

My explanation in this blog focuses on transferring data with ODP to another system. Hi Marc, please find my answers to your questions below. Thanks for an informative blog. My scenario: … However, it does not enable more extractors in real time without SLT. There is no difference in the implementation; you just use another type of source system and enable the extractors once for ODP usage. This is documented in my blog and in the SAP Help. Yes, I would absolutely recommend ODP from scratch.

There is no difference in the implementation. Answer found thanks to odp. As of BW 7.


Thanks and regards. I am trying to understand the scenario where the ODP queue will help improve the extractor runtime for data loads that are performed daily during the night, by filling the ODP queue frequently. Can you please comment? For the old RSA7 way there are several guides which basically say: clear the delta queue (meaning 0 in RSA7) and then proceed with the release change.

But there are some DataSources which have safety intervals. That means that in RSA7 you can check whether there really are no records anymore. The second thing is that the records are kept for a customizable time period. That basically means that content has to be activated for two source systems in parallel (ODP and the old way) to get the ones where no ODP is available at the moment.
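For context on safety intervals: they shift the lower limit of each extraction window back, so consecutive windows deliberately overlap and the target has to tolerate duplicates. A toy Python sketch, with an invented interval length:

```python
from datetime import datetime, timedelta

SAFETY_INTERVAL = timedelta(minutes=15)  # invented value for illustration

def next_window(last_upper: datetime, now: datetime):
    """Next delta window: deliberately starts before the last upper limit."""
    return last_upper - SAFETY_INTERVAL, now

last_upper = datetime(2020, 5, 1, 2, 0)
lo, hi = next_window(last_upper, datetime(2020, 5, 1, 3, 0))
print(lo, "->", hi)  # overlaps the previous window by 15 minutes

# Because of the overlap the same record can arrive twice, so the target must
# deduplicate, e.g. by keeping only the newest image per key.
images = [("4711", datetime(2020, 5, 1, 1, 50)),
          ("4711", datetime(2020, 5, 1, 2, 30))]
latest = {}
for key, ts in images:
    if key not in latest or ts > latest[key]:
        latest[key] = ts
print(latest)
```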


Or maybe something like: prefer ODP, but if it is not available, use the old way. Please advise me how to implement the inventory model in BW, since this is completely different from the classic approach.

The mentioned note does not solve the issue. This blog is from Nov. Is this still a limitation, even though the extractor is now released for ODP? Can we not go ahead with this approach? Would you please help me understand how ODP compression works? How does it actually compress?

